Memory management iPhone OS Memory Warnings. What Do The Different Levels Mean?

Regarding the black art of managing memory on iPhone OS devices: what do the different levels of memory warning mean? Level 1? Level 2? Does the dial go to 11? Context: after an extensive memory stress-testing period, including running my iPad app with the iPod music player app playing, I am inclined to ignore the random yet infrequent memory warnings I am receiving. My app never crashes. Ever. My app is leak free. And, well, the memory warnings just don't seem to matter. Thanks, Doug

Memory management Big memory needs

Recently we had a task that needs a large amount of memory (RAM), but we don't have enough... We do have plenty of free HDD space, though. Are there tools (on Linux) that can handle a process's memory requests, limit its RAM usage, and swap the rest out to the HDD (the way the 'nice' tool handles priority)?
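One in-process way to lean on the HDD, shown as a minimal C sketch (the file path and buffer size are my assumptions, not from the question): mmap a large file with MAP_SHARED so the file itself is the backing store and dirty pages can be written back to disk under memory pressure. At the system level, the usual answers are adding a swap file or capping the process with a cgroup memory limit so its excess pages get swapped.

```c
/* Hedged sketch: use a file on the HDD as backing storage for a large buffer
 * via mmap, so evicted pages live on disk rather than in RAM.
 * The path and size are illustrative assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t size = (size_t)1 << 30;                 /* 1 GiB working buffer */
    int fd = open("/tmp/bigbuf.bin", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, (off_t)size) != 0) {
        perror("backing file");
        return 1;
    }
    /* MAP_SHARED makes the file the backing store: dirty pages can be
     * written back to the file under memory pressure instead of using swap. */
    char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    buf[0] = 1;                                    /* touch the mapping */
    buf[size - 1] = 2;
    munmap(buf, size);
    close(fd);
    return 0;
}
```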

Memory management Allocating more memory to an existing Global memory array

Is it possible to add memory to a previously allocated array in global memory? What I need to do is this: // cudaMalloc memory for d_A int n = 0; int N = 100; do { Kernel<<< , >>>(d_A, n++); // add N memory to d_A } while (n != 5); Does doing another cudaMalloc remove the values of the previously allocated array? In my case the values of the previously allocated array should be kept...
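For reference, cudaMalloc always returns a new, uninitialized allocation; it neither grows an existing one nor copies its contents. A hedged sketch of the usual workaround (the helper name grow_device_array is mine): allocate a larger buffer, copy the old data device-to-device, then free the old buffer.

```cuda
#include <cuda_runtime.h>

/* Hedged sketch: a "realloc" for a device array. old_count/new_count are in elements. */
static int grow_device_array(int **d_arr, size_t old_count, size_t new_count)
{
    int *d_new = NULL;
    if (cudaMalloc((void **)&d_new, new_count * sizeof(int)) != cudaSuccess)
        return -1;                                    /* old buffer left untouched */
    /* preserve the existing values */
    cudaMemcpy(d_new, *d_arr, old_count * sizeof(int), cudaMemcpyDeviceToDevice);
    cudaFree(*d_arr);
    *d_arr = d_new;
    return 0;
}
```

The loop would then call grow_device_array before each Kernel<<<grid, block>>>(d_A, n++) launch, so the kernel always sees the enlarged buffer with the old values intact.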

Memory management Requesting more memory than available

Suppose there are 50 MB available to allocate, and there is this for loop which allocates memory in each iteration. What happens when the following for loop runs? for(int i=0; i < 20; i++) { int *p = malloc(5MB); } I was asked this question in an interview. Can someone guide me on this and point me to the topics I need to learn in order to understand such situations?
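A worked version of the interview loop as a sketch (5MB written out in bytes; nothing here is from the original post). The two things that usually decide the outcome are malloc returning NULL when the allocator refuses, and Linux-style overcommit, where the call appears to succeed and physical pages are only committed when the memory is touched, so the failure can surface later as the OOM killer.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MB (1024UL * 1024UL)

int main(void) {
    char *blocks[20] = {0};
    for (int i = 0; i < 20; i++) {
        blocks[i] = malloc(5 * MB);          /* 20 x 5 MB = 100 MB requested */
        if (blocks[i] == NULL) {             /* the allocator said no */
            printf("allocation %d failed\n", i);
            break;
        }
        /* Touching the pages forces them to be committed; with overcommit
         * enabled, skipping this memset can let all 20 mallocs "succeed". */
        memset(blocks[i], 0, 5 * MB);
    }
    for (int i = 0; i < 20; i++)
        free(blocks[i]);                     /* the original loop never frees: a leak */
    return 0;
}
```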

Memory management Linux memory mapping

I have a few questions about Linux memory management (assume an x86 32-bit platform). By default, for all processes the top 1 GiB of virtual address space is mapped to the kernel area. Theoretically the kernel can map additional memory from high memory using vmalloc. My question is: what happens to the page tables of all the user processes? I assume they should get updated with the kernel memory allocation (maybe that memory will get used when the kernel is in process context). Can someone explain fr

Memory management iOS 7 memory issues

I was recently trying to make my iOS 6-compatible app work with iOS 7 (note: still in beta). So I had my boss install iOS 7 on his iPhone 4S. We noticed that the camera picker was slow and unresponsive, and when we take a picture everything freezes. When I profile with the Allocations instrument I notice that we have really high memory usage: 160 MB. And by the way, we received a lot of memory warnings. So I tried running on the iPhone that still has iOS 6, and the maximum memory spike was 16 MB. Has an

Memory management Valgrind - program is crashing

I'm new to valgrind. While trying to check my small program, I'm getting this error:
==973== Process terminating with default action of signal 11 (SIGSEGV)
==973== Bad permissions for mapped region at address 0x57E
==973==    at 0x57E: ???
==973==    by 0x400C3BA: _dl_signal_error (in /lib64/ld-2.4.so)
==973==    by 0x400B82F: _dl_map_object_deps (in /lib64/ld-2.4.so)
==973==    by 0x4002ED9: dl_main (in /lib64/ld-2.4.so)
==973==    by 0x4011F81: _dl_sysdep_start (in /lib64/ld-2.4.so)
==973==

Memory management Windows CE: Mapping Physical memory in user mode

I need to access physical memory in user mode on a platform running Windows Embedded Compact 2013. I found an article which does that: the memory mapping is done in a kernel-mode driver and the address is returned to the user-mode program. In the kernel driver: BOOL BTN_IOControl(DWORD context, DWORD code, UCHAR *pInBuffer, DWORD inSize, UCHAR *pOutBuffer, DWORD outSize, DWORD *pOutSize) { PDWORD tValue = (PDWORD)pOutBuffer; switch (code) { case IOCTL_MAP_MEM

Memory management Pool of Memory in Kernel driver for Multiple processes

Suppose we want to maintain a pool of memory in a device driver or module. How can that pool be created and made available to multiple processes, let's say 4 processes, accessing this driver/module? Assume 1 MB of memory in the pool. When I was reading LDD I came across the API mempool_create(), but then there's also kmalloc. If someone has done such a thing, kindly share the knowledge. My initial approach is to allocate using kmalloc() and then maintain start and end pointers in the private obje
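A minimal sketch of the mempool route from LDD, under assumed numbers (256 elements of 4 KB gives the 1 MB pool mentioned in the question); the per-process bookkeeping, e.g. stashing start/end pointers in the file's private_data from open(), is left out:

```c
/* Hedged sketch: a 1 MB pool carved into 4 KB elements with mempool_create_kmalloc_pool().
 * The element size and count are illustrative assumptions. */
#include <linux/module.h>
#include <linux/mempool.h>
#include <linux/slab.h>

#define ELEM_SIZE  4096
#define MIN_ELEMS  256            /* 256 * 4 KB = 1 MB guaranteed minimum */

static mempool_t *pool;

static int __init pool_init(void)
{
    pool = mempool_create_kmalloc_pool(MIN_ELEMS, ELEM_SIZE);
    if (!pool)
        return -ENOMEM;
    return 0;
    /* processes then get chunks via mempool_alloc(pool, GFP_KERNEL)
     * in the driver's ioctl/read path and give them back with mempool_free() */
}

static void __exit pool_exit(void)
{
    mempool_destroy(pool);
}

module_init(pool_init);
module_exit(pool_exit);
MODULE_LICENSE("GPL");
```

A single kmalloc of 1 MB can also work, but a mempool additionally guarantees that a minimum number of elements stays available under memory pressure, which is why LDD introduces mempool_create() in the first place.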

Memory management Maximum number of addressable LEDs to be controlled by an Arduino

I'm working on a project that involves building a big 'light wall' (h × w = 3.5 × 7 m). The wall is covered in semi-transparent fabric, and behind it strips of addressable RGB LEDs are going to be mounted. The strips will be mounted vertically and function as a big display for showing primitive graphics and visuals. I plan to control the LEDs using an Arduino. The reason for choosing the Arduino as the controller is that it must be easily programmable, (relatively) cheap, and should work as

Memory management Process virtual address space and kernel address space? How?

I am very new to kernel and systems programming, and I have a couple of questions related to virtual memory, mostly about static versus run time (i.e. ELF, loading/linking, etc.), specific to Linux x86. My understanding might be completely wrong... I am aware of virtual memory and its 1G/3G split, where a process cannot access addresses above PAGE_OFFSET in user mode (PAGE_OFFSET is a virtual address). At static time, does the ELF file define the process's virtual address space? If ELF defines the virtual address space, then doe

Memory management Dataset does not fit in memory

I have an MNIST-like dataset that does not fit in memory (process memory, not GPU memory). My dataset is 4 GB. This is not a TFLearn issue. As far as I know, model.fit requires arrays for x and y. TFLearn example: model.fit(x, y, n_epoch=10, validation_set=(val_x, val_y)) I was wondering whether there's a way to pass a "batch iterator" instead of an array. Basically, for each batch I would load the necessary data from disk. This way I would not run into process memory overflow errors

Memory management Does Rust free up the memory of overwritten variables?

I saw in the Rust book that you can define two different variables with the same name: let hello = "Hello"; let hello = "Goodbye"; println!("My variable hello contains: {}", hello); This prints out: My variable hello contains: Goodbye What happens to the first hello? Does it get freed? How could I access it? I know it would be bad to name two variables the same, but if this happens by accident because I declare it 100 lines below, it could be a real pain.

Memory management Is reusing a variable but memory not being released by the process considered a memory leak?

In Vala I have a TreeMultiMap from the Gee library, created as a private variable of a class. When I use the tree multi map and fill it with data, the memory consumption of the process increases to 14.2 MiB. When I clear the tree multi map (still the same variable) and use it again, but add less data to it, the memory consumption of the process doesn't increase, but it doesn't decrease either. It stays at 14.2 MiB. The code is as follows: MultiMapTest.vala using Gee; private TreeMultiMap

Memory management What is Dyon's Memory Model?

The Dyon Tutorial says it uses "lifetimes" rather than garbage collection or manual memory management. But how then does that lifetime model differ from Ownership in Rust? Dyon has a limited memory model because of the lack of a garbage collector. The language is designed to work around this limitation. - The Dyon Programming Language Tutorial How exactly is this model limited? Is there an example of memory managing code that Dyon could not run because of this limitation?

Memory management Advantage Database Server: in-memory queries

As far as I know, ADS v10 tries to keep the result of a query in memory until it becomes quite large. The same should be true for the __output table and for temporary tables. When the result becomes large, swapping starts. The question is: what memory limit is set for a query, a worker, or whatever? Can this limit be configured? Thanks.

Memory management Using ALLOCATE and SegFault error

I am compiling some Fortran 90 code using the gfortran compiler (Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1. After compiling my code I was getting a segmentation fault when I attempted to run the program. Using Valgrind I was able to locate the problem in the section of code below. As can be seen, a DEALLOCATE was being used without a preceding ALLOCATE. I should note that I am not the one who wrote this software, and it was successfully compiled using f90; however, I no longer have access

Memory management Is call stack management machine dependent?

I think I understand the basics of stack memory, but I still do not fully understand what is responsible for the mechanism of managing the stack: is it the compiler, or the CPU architecture? Is it programming-language dependent? For example, I read that on ARM there is a tendency to reduce the use of the stack in function calls, so arguments to functions are usually passed through 4 registers. However, it seems to me that this could be implemented using general-purpose registers on other CPUs a
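A small illustration of why this is decided by the platform ABI rather than by the language (the register assignments in the comments are the 32-bit ARM AAPCS and the x86-64 System V convention; the function itself is just an example of mine):

```c
/* How the same C call is lowered under two common ABIs (the comments are the point):
 *
 *   32-bit ARM (AAPCS):  a, b, c, d go in r0-r3; e and f spill to the stack.
 *   x86-64 (System V):   a..f all fit in registers rdi, rsi, rdx, rcx, r8, r9.
 *
 * The compiler implements whichever convention the target ABI specifies;
 * the C language itself says nothing about registers or the stack. */
long sum6(long a, long b, long c, long d, long e, long f)
{
    return a + b + c + d + e + f;
}
```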

Memory management Memory fragmentation in CUDA

I am having a memory allocation problem which I can't understand. I am trying to allocate a char array on the GPU (I am guessing it is probably a memory fragmentation issue). Here is my code: #include<stdio.h> #include<stdlib.h> #include<string.h> #include<cuda.h> inline void gpuAssert(cudaError_t code, char *file, int line, int abort=1) { if (code != cudaSuccess) { printf("GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line); if
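For reference, a self-contained sketch of the pattern the snippet is heading toward (the array size is assumed, and the abort flag is dropped for brevity): allocate the char array with cudaMalloc and wrap every runtime call in the gpuAssert-style check so the failing call and its error string get reported.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

/* Same idea as the question's gpuAssert, completed so it compiles. */
#define gpuErrchk(ans) gpuAssert((ans), __FILE__, __LINE__)
static inline void gpuAssert(cudaError_t code, const char *file, int line)
{
    if (code != cudaSuccess) {
        fprintf(stderr, "GPUassert: %s %s %d\n",
                cudaGetErrorString(code), file, line);
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    char *d_buf = NULL;
    size_t n = 64UL * 1024 * 1024;                 /* 64 MB char array (assumed size) */
    gpuErrchk(cudaMalloc((void **)&d_buf, n));
    gpuErrchk(cudaMemset(d_buf, 0, n));
    gpuErrchk(cudaFree(d_buf));
    return 0;
}
```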

Memory management Laravel 4 artisan command memory issue

I wrote an artisan command to export exercises from a database into standalone packages that can be used in an e-learning system like Moodle, ... It is a huge number of exercises, and after a while the memory gets exhausted. I tried unsetting variables, activating the garbage collector, disabling the query log, and doing some profiling, but so far with no success. I attached my script below; with each exercise I process, the memory usage goes up by 300 KB. Any ideas what I can do? use Illuminate

Memory management Understanding zram concepts in embedded system

I'm new to the zram concept. Basically I'm trying to understand memory allocation for zram devices and their usage in embedded systems. I Googled to find the maximum size that can be assigned to disksize (/sys/block/zram0/disksize), but in vain. I have a few basic doubts. The procedure to use zram is as follows (the basically suggested disksize is 25% of total RAM, and the total RAM size of my device is 512 MB): echo "134217728" > /sys/block/zram0/disksize mkswap /dev/block/zram0 swapon /dev/block/zram0 What is the

Memory management Node.js - Allocation failed - process out of memory even with use of JSONStream parser

I'm trying to read and parse about 12 large JSON files (ranging from 100 MB+ to 500 MB+) in Node. I tried using JSONStream (as suggested by many others as a solution to this problem) to prevent the following error: FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory However, I'm still getting this error. This is the first time I've ever tried using a streaming file reader like this, so I'm not sure what the problem could be. Right now, the code I have is: for (v

Memory management Find my application's memory footprint programmatically

I am trying to measure my application's memory footprint programmatically. I am using the java.lang.management classes to calculate this: val heap = ManagementFactory.getMemoryMXBean.getHeapMemoryUsage val nonHeap = ManagementFactory.getMemoryMXBean.getNonHeapMemoryUsage val total = heap + nonHeap + (?) I assumed the sum of both would give me the total amount of memory used by the application, but this is not the case: the actual size, as reported by the top command, is greater. So I am trying to understan

Memory management when to free a closure's memory in a lisp interpreter

I'm writing a simple Lisp interpreter from scratch. I have a global environment that top-level variables are bound in during evaluation of all the forms in a file. When all the forms in the file have been evaluated, the top-level env and all of the key/value data structures inside it are freed. When the evaluator encounters a lambda form, it creates a PROC object that contains 3 things: a list of arguments to be bound in a local frame when the procedure is applied, the body of the function, an
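One common answer, shown as a hedged C sketch (the type and function names are mine, not the asker's): the PROC has to keep its captured environment alive, so environments are reference-counted, and a closure's memory, together with its env if nobody else still holds it, is released only when the PROC itself is released rather than when the file's top-level env is torn down. Cyclic references would still need a real garbage collector; the sketch ignores that.

```c
#include <stdlib.h>

/* Hedged sketch of refcounted environments for closures. */
typedef struct Env {
    struct Env *parent;
    int refcount;
    /* ... bindings ... */
} Env;

typedef struct Proc {
    /* params and body omitted */
    Env *env;                        /* captured defining environment */
} Proc;

static Env *env_retain(Env *e) { if (e) e->refcount++; return e; }

static void env_release(Env *e)
{
    /* each env holds one reference on its parent by convention,
     * so freeing a child drops one reference from the parent too */
    while (e && --e->refcount == 0) {
        Env *parent = e->parent;
        /* free this frame's bindings here */
        free(e);
        e = parent;
    }
}

static Proc *make_proc(Env *defining_env)
{
    Proc *p = malloc(sizeof *p);
    p->env = env_retain(defining_env);   /* the closure keeps its env alive */
    return p;
}

static void free_proc(Proc *p)
{
    env_release(p->env);                 /* env freed only when the last holder is gone */
    free(p);
}
```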

Memory management Why is a big foreach loop getting slower?

I have an application built with Windows Forms. With this application I am reading a text file of tens of thousands of lines. I process the individual rows in the file and save them to the database. While the first lines are processed very quickly, the processing time increases as the process continues. It looks like there's a memory increase. I don't know what is lacking. Waiting for your suggestions. This is the code that I use: if (ofData.ShowDialog() == DialogResult.OK) {

Memory management Converting CUDA array1 of typename1 to array2 of typename2

Dear CUDA scholars, I am looking for a solution to the problem below. a) I have two arrays: 1) array1 of size1, which is of typename1; 2) array2 of size1, which is of typename2. b) I want to write a kernel with the following prototype: __global__ void kernel(void* dest, void* src, int dest_sizeoftype, int src_sizeoftype, int num_array_elts); c) Supposing I create num_array_elts CUDA threads, each thread copies its element from src to the destination. Issue: a) The place I am getting stuck is which fun
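Assuming the goal is a raw per-element copy (a bit-for-bit move; a genuine numeric conversion such as int to float would need the actual types, not just their sizes), one way to fill in the kernel is a plain byte loop per thread. The parameter names follow the prototype in the question:

```cuda
#include <cuda_runtime.h>

/* One thread per element; copies min(src, dest) element-size bytes.
 * Note: this is a bit-for-bit copy, not a value conversion such as int -> float. */
__global__ void kernel(void *dest, void *src,
                       int dest_sizeoftype, int src_sizeoftype,
                       int num_array_elts)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= num_array_elts)
        return;

    const char *s = (const char *)src + (size_t)i * src_sizeoftype;
    char *d = (char *)dest + (size_t)i * dest_sizeoftype;
    int n = dest_sizeoftype < src_sizeoftype ? dest_sizeoftype : src_sizeoftype;

    for (int b = 0; b < n; ++b)
        d[b] = s[b];
}
```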

Memory management Question about memory allocation in CUDA kernel

Hey there, I have an array of size SIZE*sizeof(double) on my host. I allocate a device pointer of the size of the host array and copy the array to the device. Now I pass this device array dev_point to my kernel function. Each thread needs to modify some values of the passed array and then call another __device__ function with the new array values (different for each thread). Now I wonder how to do this. Before, I had a complete CPU version (serial code) of my program and I simply alway
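A hedged sketch of one common arrangement (SIZE, the helper, and the extra buffers are assumptions of mine): give every thread its own slice of a global scratch buffer, copy the shared input into that slice, modify it, and hand the private slice to the __device__ function, so threads never clobber each other's values. The host would allocate scratch as num_threads * SIZE doubles with cudaMalloc.

```cuda
#include <cuda_runtime.h>

#define SIZE 32          /* assumed length of the per-thread array */

__device__ double use_values(const double *v, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; ++i)
        s += v[i];
    return s;
}

/* dev_point: the shared input array of length SIZE
 * scratch:   num_threads * SIZE doubles, one private slice per thread
 * results:   one value per thread */
__global__ void kernel(const double *dev_point, double *scratch,
                       double *results, int num_threads)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= num_threads)
        return;

    double *mine = scratch + (size_t)tid * SIZE;   /* this thread's private copy */
    for (int i = 0; i < SIZE; ++i)
        mine[i] = dev_point[i];

    mine[tid % SIZE] += 1.0;                       /* thread-specific modification */
    results[tid] = use_values(mine, SIZE);
}
```

If SIZE is small and known at compile time, a per-thread local array (double mine[SIZE];) avoids the extra global scratch buffer entirely.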

Memory management DXL Release memory after changing from edit mode to read mode?

I'm stuck on one DXL problem and would really appreciate any help. I have to create links in a lot of modules and therefore have to open them in edit mode. But that will use more than 2 GB of DOORS memory if I open them all in edit mode at once. So I decided to open each of them in edit mode to create the links and then downgrade it to read-only mode. However, this doesn't release memory either. Is there a way to release the memory used by edit mode? Thanks for any help.

Memory management PIC24F - Possible for data values to persist, even after the PIC is powered off?

I have a question regarding the persistence (storage) of data values in a PIC24F, even after the PIC has been turned off. I have read through the datasheet(s), but am confused about the difference between the EEPROM and Flash memory. For example, say I have a variable "x"; is there a way for the value of "x" to persist even after the PIC has been shut off? I know programs can persist in flash memory as long as the code is compiled for Stand-Alone Operation (COE_OFF). However, I am specific

Memory management Copying host memory to a CUDA __device__ variable

I've tried to find a solution to my problem using Google but failed. There were a lot of snippets that didn't fit my case exactly, although I would think that it's a pretty standard situation. I'll have to transfer several different data arrays to CUDA, all of them being simple struct arrays with dynamic size. Since I don't want to put everything into the CUDA kernel call, I thought that __device__ variables should be exactly what I need. This is how I tried to copy my host data to the __devi
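For a __device__ variable with a fixed size, the matching runtime call is cudaMemcpyToSymbol; a minimal sketch (the struct, the capacity, and the helper name are placeholders of mine):

```cuda
#include <cuda_runtime.h>

struct Item { int id; float value; };        /* assumed example struct */

#define MAX_ITEMS 256
__device__ Item d_items[MAX_ITEMS];          /* fixed-capacity __device__ array */
__device__ int  d_item_count;

/* Host-side helper: copy a host array into the __device__ variables by symbol. */
int upload_items(const Item *host_items, int count)
{
    if (count > MAX_ITEMS)
        return -1;
    cudaMemcpyToSymbol(d_items, host_items, count * sizeof(Item));
    cudaMemcpyToSymbol(d_item_count, &count, sizeof(int));
    return (cudaGetLastError() == cudaSuccess) ? 0 : -1;
}
```

Because a __device__ variable must have a compile-time size, dynamically sized struct arrays are more commonly handled with cudaMalloc plus either a kernel parameter or a __device__ pointer that is itself set via cudaMemcpyToSymbol.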

Memory management How does the CPU really request data in a computer?

I was wondering how exactly a CPU requests data in a computer. On a 32-bit architecture, I thought that the CPU would put a destination on the address bus and would receive 4 bytes on the data bus. I recently read about memory alignment in computers and it confused me. I read that the CPU has to read memory twice to access an address that is not a multiple of 4. Why is that? The address bus lets it access addresses that are not multiples of 4.
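A small worked example (the addresses in the comments are made up for illustration): with a 32-bit data bus, memory is delivered in aligned 4-byte words, so a 4-byte read that starts at an address that is not a multiple of 4 straddles two words and costs two bus transactions.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    /* An aligned 32-bit bus delivers words [0x1000..0x1003], [0x1004..0x1007], ...
     * Reading 4 bytes at 0x1000 -> one word  -> one bus access.
     * Reading 4 bytes at 0x1002 -> bytes 0x1002..0x1005 span two words
     *                              -> two bus accesses plus byte shuffling. */
    unsigned char raw[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    uint32_t v;
    /* memcpy is the portable way to read an unaligned 32-bit value; a direct
     * unaligned pointer dereference may need two bus accesses, or even trap
     * on strictly aligned machines. */
    memcpy(&v, raw + 2, sizeof v);
    printf("%u\n", v);
    return 0;
}
```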

Memory management Micro-programmed control circuit question

I ran into a question: in a digital system with a micro-programmed control circuit, the total number of distinct operation patterns of the 32 signals is 450. If the micro-program memory contains 1K microinstructions, how many bits are saved in the micro-program memory by using a nano memory? 1) 22 Kbits 2) 23 Kbits 3) 450 Kbits 4) 450*32 Kbits I read in my notes that (1) is true, but I couldn't understand how we get this. Edit: Microinstructions are stored in the micro memory (control memory). There i

Memory management Differentiation between integer and character

I have just started learning C++ and have come across the various data types in C++. I also learnt how the computer stores values when the data type is specified. One doubt that occurred to me while learning about the char data type was how the computer differentiates between integers and characters. I learnt that the char data type uses 8 bits to store a character, and that the computer can store a character in a memory location by following ASCII encoding rules. However, I didn't realise how the co
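The short version is that memory itself stores only bit patterns; the declared type, and the way you ask for the value to be printed, decide how a pattern is interpreted. A tiny example (using printf, which works in C and C++ alike):

```c
#include <stdio.h>

int main(void) {
    char c = 'A';        /* stored as the bit pattern 01000001 (65 in ASCII) */
    int  i = 65;         /* same value, stored in a wider (typically 32-bit) slot */

    printf("%c %d\n", c, c);   /* prints: A 65  -> the same byte, shown two ways */
    printf("%c %d\n", i, i);   /* prints: A 65  -> the format chooses the meaning */
    return 0;
}
```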

Memory management Understanding GPU Allocation on TF

I was doing the tutorial A Guide to TF Layers: Building a Convolutional Neural Network and got a Resource exhausted: OOM when allocating tensor with shape [10000,32,28,28] during the evaluate phase. Since my GPU is a GeForce GTX 680 with only 2 GB of memory I'm not surprised; however, after playing with the batch size I'm a little confused about how the GPU memory is managed. I was able to evaluate the model with a batch size of 2500, but I'm confused about how much memory is used and how TF/Python/NumPy manag

Memory management Release UIWebView content from the memory, force the app to release the memory

I'm developing an app that uses a lot of images. I'm using a UIWebView to present about 200 images using JavaScript code (I'm using the UIZE library). The problem is that when I'm done with the UIWebView, I'm using the following code in viewWillDisappear: -(void)viewWillDisappear:(BOOL)animated { [[NSURLCache sharedURLCache] removeAllCachedResponses]; [webViews stringByEvaluatingJavaScriptFromString:@"document.open();document.close();"]; } with - (void)viewDidUnload { webViews = nil

Memory management Libgdx, Why Should I Use Constructors When Switching Screens?

I am a beginner with libgdx and was wondering in which cases you would need to use a constructor when switching screens (examples would be helpful). Is it to save memory? Also, is it better to create instances of all the screens in the main class that extends Game? Here is an example of such instances from https://code.google.com/p/libgdx-users/wiki/ScreenAndGameClasses : public class MyGame extends Game { MainMenuScreen mainMenuScreen; AnotherScreen anotherScreen;

Memory management Difference between local allocatable and automatic arrays

I am interested in the difference between alloc_array and automatic_array in the following extract: subroutine mysub(n) integer, intent(in) :: n integer :: automatic_array(n) integer, allocatable :: alloc_array(:) allocate(alloc_array(n)) ...[code]... I am familiar enough with the basics of allocation (not so much on advanced techniques) to know that allocation allows you to change the size of the array in the middle of the code (as pointed out in this question), but I'm intere

Memory management Swap Space vs Backing Store

I am currently reading about memory management in my operating systems textbook and am curious whether there is a difference between swap space and a backing store. They both seem to do the same thing in general. From what I understand, when a page fault occurs an inactive page is found and moved to the swap space so that the page that caused the fault can be stored. A backing store seems to do the same thing, except for an entire process rather than just a page. Is this the main difference between the
