Search Results

Search found 15401 results on 617 pages for 'memory optimization'.

Page 218 of 617

  • What Makes an Effective Search Engine Optimization Marketing Campaign?

    In recent years, the Internet has become increasingly popular as a marketing tool. Indeed, it now gives traditional marketing and advertising channels a run for their money because of its ability to reach millions of customers, a shift driven by the growing number of Internet users around the world who look for information online.

    Read the article

  • Do Search Engine Optimization Techniques Really Boost Your Network Traffic?

    As everyone knows, the Internet has become the best place to find information: a simple search engine query returns a list of web pages containing whatever you are looking for. It is also a very common habit for all of us to favor the websites listed at the top of the search results. For businesses, then, it is very important to get their web pages ranked at the top of the results, so that users searching for related information end up on their sites.

    Read the article

  • AS3 - Unloaded AVM1 SWFs trace out as unloaded, but memory is not freed in the AVM2 machine

    - by puppbits
    I have a large project built in AS3. Part of its main functionality is to load and unload various AS2 SWFs. The problem is that the memory isn't freed up once they are unloaded. I have access to the AS2 code base and have destroyed all objects, stopped and killed timers and listeners, removed clips from the stage, and destroyed all the MovieClip.prototype entries that were created. They look to be clean, as the AS2 debugger shows no remnants of the objects after the destroy function is run. In AS3 I've closed the LocalConnection, cleared all references/listeners to the AVM1Movie, and run Loader.unloadAndStop(). The trace output in Flex says the SWF was unloaded, but looking at Windows Task Manager, memory usage never drops back to where it was before the AS2 SWF was loaded. Each AS2 SWF can take up to 80 MB each time it runs, so memory gets eaten up fast after loading and unloading just a few AS2 files. At this point, if the AS2 SWFs really are unloaded, the only things I can assume are left are MovieClip.prototype and/or _global and _root variables added during the AS2 runtime. But I've gone through those and can't find anything else that might be sticking around. Has anyone seen problems before with the AVM1 machine not freeing up its memory?

    Read the article

  • Do ctypes Structures and POINTERs automatically free their memory when the Python object is deleted?

    - by jsbueno
    When using Python ctypes there are Structures, which let you mirror C structs on the Python side, and POINTER objects, which build a sophisticated Python object from a memory address and can be used to pass objects by reference back and forth to C code. What I could not find in the documentation or elsewhere is what happens when a Python object containing a Structure, dereferenced from a pointer returned by C code (that is, the C function allocated the memory for the structure), is itself deleted. Is the memory for the original C structure freed? If not, how do I free it? Furthermore, what if the Structure itself contains POINTERs to other data that was also allocated by the C function? Does deleting the Structure object free what its member POINTERs point to? (I doubt it.) If not, how do I do that? Trying to call the system free from Python for the pointers in the Structure crashes Python for me. In other words, I have this structure, filled in by a C function call, and I want to free the memory it is using up from Python code:

        class PIX(ctypes.Structure):
            """Comments not generated"""
            _fields_ = [
                ("w", ctypes.c_uint32),
                ("h", ctypes.c_uint32),
                ("d", ctypes.c_uint32),
                ("wpl", ctypes.c_uint32),
                ("refcount", ctypes.c_uint32),
                ("xres", ctypes.c_uint32),
                ("yres", ctypes.c_uint32),
                ("informat", ctypes.c_int32),
                ("text", ctypes.POINTER(ctypes.c_char)),
                ("colormap", ctypes.POINTER(PIXCOLORMAP)),
                ("data", ctypes.POINTER(ctypes.c_uint32)),
            ]

    Read the article
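
    The behaviour, as the ctypes documentation describes it, is that ctypes frees only memory it allocated itself; a structure allocated by foreign C code must be released by whoever allocated it. A minimal sketch of the usual pattern follows, assuming the library really used the C runtime's malloc (a crash like the one described is typical when allocators are mismatched, or when the dereferenced Structure is passed instead of the original pointer). If the library exports its own destructor (for Leptonica's PIX that would be pixDestroy), calling that instead is safer still.

        import ctypes
        import ctypes.util

        # Load the C runtime to reach the free() that pairs with the
        # library's malloc(). This is an assumption: if the library
        # uses a different allocator, calling free() here is undefined.
        libc = ctypes.CDLL(ctypes.util.find_library("c"))
        libc.free.argtypes = [ctypes.c_void_p]
        libc.free.restype = None

        def free_c_struct(ptr):
            # ptr must be the POINTER(...) the C call returned, not the
            # dereferenced Structure: free() needs the original address.
            if ptr:
                libc.free(ctypes.cast(ptr, ctypes.c_void_p))

    Member pointers (text, colormap, data above) would need the same treatment, innermost first, before the outer struct is freed; a library-provided destructor already knows that order, which is another reason to prefer it.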

  • How can I save/keep in sync an in-memory graph of objects with the database?

    - by Greg
    Question: what is a good best-practice approach for saving/keeping in sync an in-memory graph of objects with the database?

    Background: say I have the classes Node and Relationship, and the application builds up a graph of related objects using these classes. There might be 1000 nodes with various relationships between them. The application needs to query the structure, hence an in-memory approach is no doubt good for performance (e.g. traversing the graph from Node X to find the root parents). The graph does, however, need to be persisted into a database with tables NODES and RELATIONSHIPS. Ideal requirements would include:

    - build up changes in memory and then 'save' afterwards (mandatory)
    - when saving, apply updates to the database in the correct order, to avoid hitting any database constraints (mandatory)
    - keep the persistence mechanism separate from the model, for ease of changing the persistence layer if needed; e.g. don't just wrap an ADO.NET DataRow in the Node and Relationship classes (desirable)
    - a mechanism for optimistic locking (desirable)

    Or is the overhead of all this just not worth it for a smallish application, and should I simply hit the database each time for everything (assuming the response times were acceptable)? I would still like to avoid that if it isn't too much extra overhead, to remain somewhat scalable performance-wise.

    Read the article
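
    What this question describes, queuing changes in memory and flushing them later in a constraint-safe order, is commonly called a unit of work. A minimal sketch in Python follows, purely to show the ordering; the db/tx API and every name in it are hypothetical:

        class UnitOfWork:
            """Queue in-memory graph changes, then flush them in an
            order that never violates foreign keys on RELATIONSHIPS."""

            def __init__(self, db):
                self.db = db  # hypothetical database facade
                self.new_nodes, self.new_rels = [], []
                self.removed_nodes, self.removed_rels = [], []

            def add_node(self, node):
                self.new_nodes.append(node)

            def add_relationship(self, rel):
                self.new_rels.append(rel)

            def remove_node(self, node):
                self.removed_nodes.append(node)

            def remove_relationship(self, rel):
                self.removed_rels.append(rel)

            def commit(self):
                # Deletes run relationships-first, inserts nodes-first,
                # so a relationship never references a missing node.
                with self.db.transaction() as tx:
                    for rel in self.removed_rels:
                        tx.delete("RELATIONSHIPS", rel.id)
                    for node in self.removed_nodes:
                        tx.delete("NODES", node.id)
                    for node in self.new_nodes:
                        node.id = tx.insert("NODES", node.fields())
                    for rel in self.new_rels:
                        tx.insert("RELATIONSHIPS", rel.fields())

    The optimistic-locking requirement can piggyback on the same flush: include a version column in each UPDATE's WHERE clause and treat zero affected rows as a conflict.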

  • Correct way to switch between UIViews with ARC. My way leads to memory leaks :( (iOS)

    - by Andrei Golubev
    I use Xcode 4.4 with ARC on. I have dynamically created UIViews in ViewController.m:

        UIView *myviews[10];

    Then in the - (void)viewDidLoad function I fill each one with the picture I need:

        myviews[viewIndex] = [[UIView alloc] initWithFrame:myrec];
        UIImage *testImg;
        UIImageView *testImgView;
        testImg = [UIImage imageNamed:[NSString stringWithFormat:@"imgarray%d.png", viewIndex]];
        testImgView.image = testImg;
        viewIndex++;

    All seems to be fine. When I want to jump from one view to another, I do this with two buttons:

        [self.view addSubview:myviews[viewIndex]];
        CATransition *animation = [CATransition animation];
        [animation setDelegate:self];
        [animation setDuration:1.0f];
        [animation setType:@"rippleEffect"];
        [animation setSubtype:kCATransitionFromTop];
        //[animation setTimingFunction:[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut]];
        [self.view.layer addAnimation:animation forKey:@"transitionViewAnimation"];

    Nothing seems to be wrong, but memory consumption grows at huge speed when I switch between views, and then I get a low-memory warning or sometimes the application just crashes. I have tried using a UIViewController array and switching between the controllers: nothing changes; the low-memory warning is what I end up with. Maybe I need to clean up the memory somehow? But how? ARC does not allow release and so on. The last thing I tried (sorry, maybe not very professional) before adding a new subview was this:

        NSArray *viewsToRemove = [self.view subviews];
        for (UIView *view in viewsToRemove) {
            [view removeFromSuperview];
        }

    But this does not help either. Please don't judge too harshly; I am still new to iOS and Objective-C.

    Read the article

  • Good C++ array class for dealing with large arrays of data in a fast and memory-efficient way?

    - by Shane MacLaughlin
    Following on from a previous question about heap usage restrictions, I'm looking for a good standard C++ class for dealing with big arrays of data in a way that is both memory- and speed-efficient. I had been allocating the array using a single malloc/HeapAlloc, but after multiple tries using various calls I keep falling foul of heap fragmentation. So the conclusion I've come to, other than porting to 64-bit, is to use a mechanism that lets a large array span multiple smaller memory fragments. I don't want one allocation per element, as that is very memory-inefficient, so the plan is to write a class that overrides the [] operator and selects the appropriate element based on the index. Is there already a decent class out there that does this, or am I better off rolling my own? From my understanding, and some googling, a 32-bit Windows process should theoretically be able to address up to 2 GB. Now, assuming I have 2 GB installed and various other processes and services are hogging about 400 MB, how much usable memory do you think my program can reasonably expect to get from the heap? I'm currently using various flavours of Visual C++.

    Read the article
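
    For what it's worth, the index arithmetic such a class needs is small enough to sketch. Here it is in Python rather than C++, purely to show the scheme (the class and chunk size are made up); note that C++'s std::deque already stores elements in fixed-size blocks like this, so it is worth benchmarking before rolling your own.

        class ChunkedArray:
            """A large byte array stored as fixed-size blocks, so no
            single huge allocation is ever requested from the heap."""

            def __init__(self, length, chunk_size=1 << 16):
                self.length = length
                self.chunk_size = chunk_size
                n_chunks = (length + chunk_size - 1) // chunk_size
                self.chunks = [bytearray(chunk_size) for _ in range(n_chunks)]

            def _locate(self, index):
                if not 0 <= index < self.length:
                    raise IndexError(index)
                # The whole trick: one divmod maps a flat index
                # to (which block, offset within that block).
                return divmod(index, self.chunk_size)

            def __getitem__(self, index):
                chunk, offset = self._locate(index)
                return self.chunks[chunk][offset]

            def __setitem__(self, index, value):
                chunk, offset = self._locate(index)
                self.chunks[chunk][offset] = value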

  • How does a hard drive compare to Flash memory working as a hard drive in terms of speed?

    - by Jian Lin
    In an experiment I ran, a hard drive managed about 10 MB/s write and 40 MB/s read, while a USB flash drive managed about 5 MB/s write and 10 MB/s read. Also, if I put a virtual hard drive (.vhd) file on a hard drive or on a USB flash drive and run a virtual machine from it, the one using the hard drive is quite fast, while the one using the USB flash drive is close to unusable. Yet some early netbooks used 4 GB or 8 GB of flash memory as the hard drive, and even the Apple MacBook Air has an option to use flash memory instead of a hard drive. In those situations, will the speed be slower than using a hard drive, as in the case of a USB flash drive?

    Read the article
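
    As an aside, figures like those quoted are easy to reproduce. A rough Python sketch of a sequential-write measurement follows (the paths are hypothetical; the fsync matters, since otherwise the OS cache is measured rather than the drive):

        import os
        import time

        def write_throughput(path, total_mb=256, chunk_mb=4):
            """Return approximate sequential write speed in MB/s."""
            chunk = os.urandom(chunk_mb * 1024 * 1024)
            start = time.time()
            with open(path, "wb") as f:
                for _ in range(total_mb // chunk_mb):
                    f.write(chunk)
                f.flush()
                os.fsync(f.fileno())  # push data to the device itself
            os.remove(path)
            return total_mb / (time.time() - start)

        # Hypothetical mount points for a hard drive and a USB stick:
        # print(write_throughput("/mnt/hdd/test.bin"), "MB/s")
        # print(write_throughput("/mnt/usb/test.bin"), "MB/s")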

  • How do I make kdump use a permissible range of memory for the crash kernel?

    - by Philip Durbin
    I've read the Red Hat Knowledgebase article "How do I configure kexec/kdump on Red Hat Enterprise Linux 5?" at http://kbase.redhat.com/faq/docs/DOC-6039 and http://prefetch.net/blog/index.php/2009/07/06/using-kdump-to-get-core-files-on-fedora-and-centos-hosts/. The crashkernel=128M@16M kernel parameter works fine for me in a RHEL 6.0 beta VM, but not on the RHEL 5.5 hosts I've tried. dmesg shows me:

        Memory for crash kernel (0x0 to 0x0) notwithin permissible range
        disabling kdump

    Here's the line from grub.conf:

        kernel /vmlinuz-2.6.18-194.3.1.el5 ro root=/dev/md2 console=ttyS0,115200 panic=15 rhgb quiet crashkernel=128M@16M

    How do I make kdump use a permissible range of memory for the crash kernel?

    Read the article

  • What solutions do I have to enforce a memory limit on a PHP server?

    - by Zulgrib
    I would like to enforce a memory limit on a per-folder basis (and have it apply to subfolders), but I don't want the user to be able to change the memory limit. I know I can disable ini_set, and I know I can enforce a hard limit or deny ini_set with Suhosin. With the first approach, I doubt it will block changes made from the user.ini file; with the second, the user may still be able to raise the limit up to the hard limit I enforce with Suhosin. In both cases, I would prefer not to block ini_set entirely, because it may have legitimate uses for other settings. In case it is important, I'm using PHP 5.4.4 with nginx (PHP in FPM mode).

    Read the article
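
    One avenue worth noting for the FPM setup described: a pool's php_admin_value sets an ini value that neither ini_set() nor .user.ini can override, while plain php_value (and ini_set for other settings) stays available. Enforcement is per pool rather than per folder, so each site or directory tree needs its own pool. A sketch, with the pool name, paths, and limit all hypothetical:

        ; /etc/php5/fpm/pool.d/example.conf (hypothetical pool)
        [example_site]
        user = example
        group = example
        listen = /var/run/php5-fpm-example.sock
        pm = ondemand
        pm.max_children = 8
        ; Hard cap: php_admin_value cannot be overridden by ini_set()
        ; or .user.ini, unlike plain php_value.
        php_admin_value[memory_limit] = 64M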

  • How much more memcache memory do I need to get a 95% hit ratio? [on hold]

    - by OneSolitaryNoob
    I have a memcache instance running with a 90% hit ratio. How can I estimate how much more memory it needs to reach a 95% hit ratio?

    Edit: this question was blocked, but I do not think it is impossible to answer. After all, anyone who has used a caching system has answered it, most likely with trial, error, and luck. I can look at my usage patterns. I can increase or decrease memory and see how the hit rate changes. Both of these provide data that informs an estimate. But what's a good/better/best way to do this?

    Read the article
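
    On the "better" part: one way past pure trial and error is to fit the points already observed. Under long-tailed access patterns, LRU-style hit rates often grow roughly logarithmically with cache size, so a log-linear fit gives a defensible first estimate. A sketch, with all numbers hypothetical and the model rough by construction:

        import math

        def size_for_hit_ratio(observations, target):
            """Fit hit_ratio = a + b*ln(size_mb) to observed
            (size_mb, hit_ratio) points, then solve for the size
            that reaches the target ratio. Needs >= 2 distinct sizes."""
            xs = [math.log(size) for size, _ in observations]
            ys = [hit for _, hit in observations]
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
            a = my - b * mx
            return math.exp((target - a) / b)

        # e.g. 1 GB gave 85% hits and 2 GB gave 90%: for 95%, the
        # fit predicts 4096 MB (each doubling buys ~5 points here).
        print(size_for_hit_ratio([(1024, 0.85), (2048, 0.90)], 0.95))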
