Search Results

Search found 43200 results on 1728 pages for 'large object pattern'.


  • Pointer to a struct containing a pointer to an object, on which I want to call a function

    - by user1795609
    I've created an ADT, a singly linked list made up of nodes. Each Node holds a pointer to an object, called data:

        class Structure {
            struct Node {
                Object *data;
                Node *next;
            };
        };

        Node *head;

    I am trying to call a function on that object, like this:

        head = new Node;
        head->data = new Object();
        head->next = NULL;
        cout << head->data.print();

    I keep getting the following error at compile time:

        error: request for member 'print' in 'head->Structure::Node::data', which is of non-class type 'Object*'
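    Since data is an Object*, the member has to be reached through the pointer with -> (or by dereferencing first). A minimal sketch of the corrected call, assuming Object declares a print() member:

        #include <cstddef>
        #include <iostream>

        struct Object {
            void print() const { std::cout << "Object::print\n"; }
        };

        struct Node {
            Object *data;
            Node *next;
        };

        int main() {
            Node *head = new Node;
            head->data = new Object();
            head->next = NULL;
            head->data->print();          // ->, not ., because data is a pointer
            // equivalently: (*head->data).print();
            delete head->data;
            delete head;
            return 0;
        }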


  • Large memory chunk not garbage collected

    - by Niels
    In a hunt for a memory leak in my app I chased down a behaviour I can't understand. I allocate a large memory block, but it doesn't get garbage-collected, resulting in an OOM, unless I explicitly null the reference in onDestroy. In this example I have two almost identical activities that switch between each other. Both have a single button. On pressing the button, MainActivity starts OOMActivity, and OOMActivity returns by calling finish(). After pressing the buttons a few times, Android throws an OutOfMemoryError. If I add onDestroy to OOMActivity and explicitly null the reference to the memory chunk, I can see in the log that the memory is correctly freed. Why doesn't the memory get freed automatically without the nulling?

    MainActivity:

        package com.example.oom;

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;

        public class MainActivity extends Activity implements OnClickListener {
            private int buttonId;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                System.gc();
                Button OOMButton = new Button(this);
                OOMButton.setText("OOM");
                buttonId = OOMButton.getId();
                setContentView(OOMButton);
                OOMButton.setOnClickListener(this);
            }

            @Override
            public void onClick(View v) {
                if (v.getId() == buttonId) {
                    Intent leakIntent = new Intent(this, OOMActivity.class);
                    startActivity(leakIntent);
                }
            }
        }

    OOMActivity:

        public class OOMActivity extends Activity implements OnClickListener {
            private static final int WASTE_SIZE = 20000000;
            private byte[] waste;
            private int buttonId;

            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                Button BackButton = new Button(this);
                BackButton.setText("Back");
                buttonId = BackButton.getId();
                setContentView(BackButton);
                BackButton.setOnClickListener(this);
                waste = new byte[WASTE_SIZE];
            }

            public void onClick(View view) {
                if (view.getId() == buttonId) {
                    finish();
                }
            }
        }
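    For reference, a minimal sketch of the onDestroy() override the question describes adding to OOMActivity - explicitly dropping the reference so the array stops being pinned by the destroyed activity (this shows the workaround, not the explanation being asked for):

        @Override
        protected void onDestroy() {
            super.onDestroy();
            waste = null;   // without this, the destroyed activity still holds the 20 MB array
        }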


  • Persistent Objects in ASP.NET

    - by user204588
    Hello, I'm trying to find the best way to persist an object, or rather to use the same object at a later point in the code. I create an object, then the user is redirected to another page (a form) that needs variables from that object. That form is submitted to a third party, some processing is done on their end, and then they request a page on my application that runs some more code and needs the object's variables again. I thought about a database, but this all happens at once: it's part of a user checkout process, and once it's over there's no reason to retrieve the object again, so adding to and reading from a database seems like overkill and I think it would slow the process down. Right now I'm using Session, but I keep hearing not to use that, without anyone really saying why beyond "it's bad practice". I can't really use postback values because the pages don't work that way: the checkout process starts in DLL code that redirects to the form, the form is submitted to the third party, and then a page is requested by the third party. So I'm not really sure of the best way. What are all the options, and what does everyone recommend as the best way?
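    For the Session approach the question already uses, one common pattern is a small typed wrapper so the object is created once and fetched the same way everywhere. A hedged sketch, where CheckoutState is a hypothetical class holding the checkout variables:

        using System.Web;

        public class CheckoutState
        {
            public string OrderId { get; set; }
            public decimal Total { get; set; }
        }

        public static class CheckoutSession
        {
            private const string Key = "CheckoutState";

            public static CheckoutState Current
            {
                get
                {
                    // Create the object on first access, reuse it on later requests.
                    var state = HttpContext.Current.Session[Key] as CheckoutState;
                    if (state == null)
                    {
                        state = new CheckoutState();
                        HttpContext.Current.Session[Key] = state;
                    }
                    return state;
                }
            }
        }

    Note that Session only helps for requests carrying the user's cookie; the request made by the third party arrives in a different session, so that step would still need something durable keyed by a value in the callback (an order ID, for example) - a database row or cache entry.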


  • Entity framework Update fails when object is linked to a missing child

    - by McKay
    I'm having trouble updating an object's child when the object holds a reference to a nonexistent child record. For example, the tables Car and CarColor are related through Car.CarColorId and CarColor.CarColorId. If I load the car together with its color record like this:

        var result = from x in database.Car.Include("CarColor")
                     where x.CarId == 5
                     select x;

    I get back the Car object and its Color object. Now suppose that some time ago a CarColor was deleted, but the Car record in question still contains that CarColorId value. When I run the query, the Color object is null because the CarColor record doesn't exist. My problem is that when I attach another Color object that does exist, I get a "Store update, insert" error when saving:

        Car.Color = newColor;
        Database.SaveChanges();

    It's like the context is trying to delete the nonexistent color. How can I get around this?
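    One workaround worth trying, assuming the Car entity exposes CarColorId as a foreign-key property (hypothetical here; it depends on how the model was generated), is to update the key scalar directly instead of the navigation property, so the stale reference never comes into play:

        // Hedged sketch: newColor is a CarColor row that actually exists.
        var car = database.Car.Where(x => x.CarId == 5).First();   // no Include needed
        car.CarColorId = newColor.CarColorId;                      // repoint the FK directly
        database.SaveChanges();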


  • Create object of unknown class (two inherited classes)

    - by Paul
    I've got the following classes:

        class A {
            virtual void commonFunction() = 0;
        };

        class Aa : public A {
            // Some stuff...
        };

        class Ab : public A {
            // Some stuff...
        };

    Depending on user input I want to create an object of either Aa or Ab. My immediate thought was this:

        A object;
        if (/*Test*/) {
            Aa object;
        } else {
            Ab object;
        }

    But the compiler gives me:

        error: cannot declare variable 'object' to be of abstract type 'A'
        because the following virtual functions are pure within 'A': //The functions...

    Is there a good way to solve this?
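    Because A is abstract, it can only be used through a pointer or reference. A minimal sketch using a factory function and std::unique_ptr (assuming commonFunction() is public and each derived class overrides it):

        #include <memory>

        class A {
        public:
            virtual ~A() = default;
            virtual void commonFunction() = 0;
        };

        class Aa : public A {
        public:
            void commonFunction() override { /* ... */ }
        };

        class Ab : public A {
        public:
            void commonFunction() override { /* ... */ }
        };

        // Decide the concrete type at runtime, hand back the base interface.
        std::unique_ptr<A> makeObject(bool test) {
            if (test)
                return std::make_unique<Aa>();
            return std::make_unique<Ab>();
        }

        int main() {
            bool userInput = true;                 // stand-in for real input
            std::unique_ptr<A> object = makeObject(userInput);
            object->commonFunction();              // dispatches to Aa or Ab
            return 0;
        }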


  • Best method to cache objects in PHP?

    - by Martin Bean
    Hi, I'm currently developing a large site that handles user registrations - a social networking website, for argument's sake. However, I've noticed a lag in page loads and worked out that it's the creation of objects on each page that's slowing things down. For example, I have a Member object that, when instantiated with an ID passed as a constructor parameter, queries the database for that member's row in the members table. Not bad in itself, but this happens on every page load, and more than once per page when, say, building an array of that member's friends, because a new Member object is created for each friend. So on a single page I can have upwards of seven instances of the same class, each with different properties. What I want is to reduce the database load and to persist objects between page loads - for example, to create the logged-in user's object at login (which I can do) and then store it somewhere for retrieval, so I don't have to keep re-creating it on every page. What is the best solution for this? I've looked at Memcache, but as a third-party module I can't have the web host install it on this occasion. What are my alternatives, and/or best practices in my case? Thanks in advance.
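    One approach that needs no extra modules is a per-request identity map: a static registry on the class so each member row is queried and instantiated at most once per request. A minimal sketch, assuming the constructor-by-ID behaviour described above (the load() name is hypothetical):

        <?php
        class Member
        {
            private static $loaded = array();

            public static function load($id)
            {
                if (!isset(self::$loaded[$id])) {
                    // Hits the database only once per request for a given ID.
                    self::$loaded[$id] = new Member($id);
                }
                return self::$loaded[$id];
            }

            // ... existing constructor that queries the members table by ID ...
        }

    For persistence across page loads, the logged-in user's object can be serialized into $_SESSION at login and refreshed only when the underlying row changes; sessions are built in, so no third-party module is required.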


  • Is it possible to download a large database using a MySQL query?

    - by Rose
    I am downloading files from the server using WinSCP. Is it possible to write a query to download a large database using MySQL, or some other method? I have tried the code below, but I am not able to get the whole database structure.

        <?php
        if (file_exists('backup_sql/my_backup.zip')) {
            unlink('backup_sql/my_backup.zip');
        }

        $tables = '*';
        $host = 'MY HOST NAME';
        $user = 'MY_USERNAME';
        $pass = 'MYPASSWORD';
        $name = 'MY_DB_NAME';

        $link = mysql_connect($host, $user, $pass);
        mysql_select_db($name, $link);

        // get all of the tables
        if ($tables == '*') {
            $tables = array();
            $result = mysql_query('SHOW TABLES');
            while ($row = mysql_fetch_row($result)) {
                $tables[] = $row[0];
            }
        } else {
            $tables = is_array($tables) ? $tables : explode(',', $tables);
        }

        $return = '';

        // cycle through
        foreach ($tables as $table) {
            $result = mysql_query('SELECT * FROM ' . $table);
            $num_fields = mysql_num_fields($result);

            //$return .= 'DROP TABLE ' . $table . ';';
            $row2 = mysql_fetch_row(mysql_query('SHOW CREATE TABLE ' . $table));
            $return .= "\n\n" . $row2[1] . ";\n\n";

            for ($i = 0; $i < $num_fields; $i++) {
                while ($row = mysql_fetch_row($result)) {
                    $return .= 'INSERT INTO ' . $table . ' VALUES(';
                    for ($j = 0; $j < $num_fields; $j++) {
                        $row[$j] = addslashes($row[$j]);
                        //$row[$j] = ereg_replace("\n", "\\n", $row[$j]);
                        if (isset($row[$j])) {
                            $return .= '"' . $row[$j] . '"';
                        } else {
                            $return .= '""';
                        }
                        if ($j < ($num_fields - 1)) {
                            $return .= ',';
                        }
                    }
                    $return .= ");\n";
                }
            }
            $return .= "\n\n\n";
        }

        $rand_var = time();
        $files_to_zip = array(
            "'backup_sql/db-backup-'.$rand_var.'.sql'",
        );
        $name = 'db-backup-' . $rand_var . '.sql';
        $data = $return;
        ?>

    Can anyone please help me? Thank you.
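    The usual tool for a full, structure-plus-data export is mysqldump rather than a hand-rolled script. A hedged sketch of invoking it from PHP (assuming the mysqldump binary exists on the host and exec() is not disabled - both assumptions):

        <?php
        $host = 'MY HOST NAME';
        $user = 'MY_USERNAME';
        $pass = 'MYPASSWORD';
        $name = 'MY_DB_NAME';
        $file = 'backup_sql/db-backup-' . time() . '.sql';

        // Build the command with every value shell-escaped.
        $cmd = sprintf(
            'mysqldump --host=%s --user=%s --password=%s %s > %s',
            escapeshellarg($host),
            escapeshellarg($user),
            escapeshellarg($pass),
            escapeshellarg($name),
            escapeshellarg($file)
        );
        exec($cmd, $output, $status);

        if ($status === 0) {
            echo 'Dump written to ' . $file;   // the .sql file can then be fetched with WinSCP
        } else {
            echo 'mysqldump failed with status ' . $status;
        }
        ?>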


  • Flex - Use Variables for Object Attribute Names

    - by Immanuel
    How do you use variables to access Object attributes? Suppose I have an Object declared as follows:

        var obj:Object = new Object();
        obj.Name = "MyName";
        obj.Age = "10";

    How would I do something like this?

        var fieldName:String = "Name";
        var fieldAge:String = "Age";
        var Name_Age:String = obj.fieldName + " ," + obj.fieldAge;

    The code above treats 'fieldName' and 'fieldAge' as the attribute names themselves. I want each one treated as a variable, so that the value held in the variable is used as the Object attribute name.
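    In ActionScript, dynamic property lookup uses bracket notation, so a minimal sketch of the access the question describes would be:

        var obj:Object = new Object();
        obj.Name = "MyName";
        obj.Age = "10";

        var fieldName:String = "Name";
        var fieldAge:String = "Age";
        // obj[fieldName] looks up the property whose name is the variable's value.
        var Name_Age:String = obj[fieldName] + ", " + obj[fieldAge]; // "MyName, 10"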


  • ArrayList of my objects, indexOf problem

    - by majtits
    Hello! I have a problem with Java's ArrayList. I've created a class with two attributes, x and y, and loaded a number of these objects into an ArrayList. The problem is that I don't know how to find the index of the object whose x attribute matches the value I'm searching for. Is there any way to do this?
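    indexOf only compares objects with equals(), so unless the class overrides equals() a manual scan is the simplest approach. A minimal sketch, with a hypothetical Point class standing in for the object described:

        import java.util.ArrayList;
        import java.util.List;

        class Point {
            int x, y;
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        public class FindByX {
            // Returns the index of the first element whose x matches, or -1 if none does.
            static int indexOfX(List<Point> points, int wantedX) {
                for (int i = 0; i < points.size(); i++) {
                    if (points.get(i).x == wantedX) {
                        return i;
                    }
                }
                return -1;
            }

            public static void main(String[] args) {
                List<Point> list = new ArrayList<>();
                list.add(new Point(3, 7));
                list.add(new Point(5, 1));
                System.out.println(indexOfX(list, 5)); // prints 1
            }
        }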


  • Anyone help me! java basic question

    - by Max
    How can I get a specific value from an object? I'm trying to get the value of a field from an instance, for example:

        ListOfPpl newListOfPpl = new ListOfPpl(id, name, age);
        Object item = newListOfPpl;

    How can I get the value of name from the Object item? Even if it seems easy or doesn't interest you, can anyone help me?
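    A minimal sketch, assuming ListOfPpl keeps the name in a field and exposes a getName() accessor (both assumptions here): cast the Object back to its real type and call the getter.

        class ListOfPpl {
            private final int id;
            private final String name;
            private final int age;

            ListOfPpl(int id, String name, int age) {
                this.id = id;
                this.name = name;
                this.age = age;
            }

            String getName() { return name; }
        }

        public class Demo {
            public static void main(String[] args) {
                Object item = new ListOfPpl(1, "Alice", 30);

                // Check the runtime type before casting back.
                if (item instanceof ListOfPpl) {
                    ListOfPpl person = (ListOfPpl) item;
                    System.out.println(person.getName()); // prints Alice
                }
            }
        }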


  • GIT repository layout for server with multiple projects

    - by Paul Alexander
    One of the things I like about the way I have Subversion set up is that I can have a single main repository with multiple projects. When I want to work on a project I can check out just that project, like this:

        \main
            \ProductA
            \ProductB
            \Shared

        svn checkout http://.../main/ProductA

    As a new user to Git I want to explore a bit of best practice in the field before committing to a specific workflow. From what I've read so far, Git stores everything in a single .git folder at the root of the project tree. So I could do one of two things:

    1. Set up a separate project for each product.
    2. Set up a single massive project and store products in subfolders.

    There are dependencies between the products, so the single massive project seems appropriate. We'll be using a server where all the developers can share their code. I've already got this working over SSH and HTTP, and that part I love. However, the repositories in SVN are already many GB in size, so dragging the entire repository around on each machine seems like a bad idea - especially since we're billed for excessive network bandwidth. I'd imagine that the Linux kernel project repositories are equally large, so there must be a proper way of handling this with Git, but I just haven't figured it out yet. Are there any guidelines or best practices for working with very large multi-project repositories?


  • "Session is Closed!" - NHibernate

    - by Alexis Abril
    This is in a web application environment. An initial request completes successfully, but any additional requests return a "Session is Closed" response from the NHibernate framework. I'm using an HttpModule approach with the following code:

        public class MyHttpModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.EndRequest += ApplicationEndRequest;
                context.BeginRequest += ApplicationBeginRequest;
            }

            public void ApplicationBeginRequest(object sender, EventArgs e)
            {
                CurrentSessionContext.Bind(SessionFactory.Instance.OpenSession());
            }

            public void ApplicationEndRequest(object sender, EventArgs e)
            {
                ISession currentSession = CurrentSessionContext.Unbind(SessionFactory.Instance);
                currentSession.Dispose();
            }

            public void Dispose() { }
        }

    SessionFactory.Instance is my singleton implementation, using FluentNHibernate to return an ISessionFactory object. In my repository class, I attempt to use the following syntax:

        public class MyObjectRepository : IMyObjectRepository
        {
            public MyObject GetByID(int id)
            {
                using (ISession session = SessionFactory.Instance.GetCurrentSession())
                    return session.Get<MyObject>(id);
            }
        }

    This allows code in the application to be called as such:

        IMyObjectRepository repo = new MyObjectRepository();
        MyObject obj = repo.GetByID(1);

    I have a suspicion my repository code is to blame, but I'm not 100% sure of the actual implementation I should be using. I found a similar issue on SO here. I too am using WebSessionContext in my implementation; however, no solution was provided other than writing a custom SessionManager. For simple CRUD operations, is a custom session provider required apart from the built-in tools (i.e. WebSessionContext)?
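    Since the session's lifetime is owned by the HttpModule (opened in BeginRequest, disposed in EndRequest), the repository should not dispose it as well; the using block closes the context-bound session after the first call, which is a common cause of this error. A hedged sketch of GetByID without the using:

        public class MyObjectRepository : IMyObjectRepository
        {
            public MyObject GetByID(int id)
            {
                // Use the session bound to the current context; let the
                // HttpModule dispose it at EndRequest.
                ISession session = SessionFactory.Instance.GetCurrentSession();
                return session.Get<MyObject>(id);
            }
        }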


  • Fast, Unicode-capable, cross-platform programmer's text editor that shows invisibles like ZWSP?

    - by Roger_S
    Our publishing workflow includes Windows and Linux machines (there are some Macs too, but not in the critical-path workflow). Many texts include both English and Khmer and are marked-up in XML. XML Copy Editor is the best cross-platform open-source XML editor I've discovered. It utilizes the Scintilla editing component, which is generally good with Unicode but which does not enable non-printing or invisible characters like U+200B (zero-width space) and U+200C (zero-width non-joiner) to be displayed. Khmer does not separate words with a space character as Western languages do, so ZWSP is used in electronic texts to enable applications to break lines easily. Ideally I'd edit the markup and the content in a single editor, but XML awareness is less important at times than being able to display invisibles. (OpenOffice.org Writer and Microsoft Word are the only two apps I know that will display ZWSP. They are not suitable for the markup and text manipulations that need to be done to prepare manuscripts for publication, unfortunately, although I guess they're fine for authoring.) I tried out a promising editor last week, but a search-and-replace regex operation that took under a second in TextPad 4.7.3 lasted over twenty seconds. So I want to mention that speed and the ability to handle large (up to 150mb) files is also a concern. Is there a good, fast, free or not too expensive text editor, with versions on Windows and Linux and maybe mac too, Unicode-aware and capable of displaying invisibles like ZWSP? That has syntax highlighting, can handle large files and is customizable enough that I won't tear my hair out in frustration? Thanks, Roger_S


  • Reading and writing in parallel

    - by Malfist
    I want to be able to read and write a large file in parallel, or at least in blocks, so that I don't use up so much memory. This is my current code:

        // Define memory stream which will be used to hold encrypted data.
        MemoryStream memoryStream = new MemoryStream();

        // Define cryptographic stream (always use Write mode for encryption).
        CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);

        // Start encrypting.
        using (BinaryReader reader = new BinaryReader(File.Open(fileIn, FileMode.Open)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = reader.Read(buffer, 0, buffer.Length);
                cryptoStream.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Finish encrypting.
        cryptoStream.FlushFinalBlock();

        // Convert our encrypted data from a memory stream into a byte array.
        //byte[] cipherTextBytes = memoryStream.ToArray();

        // Write our memory stream to a file.
        memoryStream.Position = 0;
        using (BinaryWriter writer = new BinaryWriter(File.Open(fileOut, FileMode.Create)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = memoryStream.Read(buffer, 0, buffer.Length);
                writer.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Close both streams.
        memoryStream.Close();
        cryptoStream.Close();

    As you can see, it reads the entire file into memory, encrypts it, then writes it out. If I happen to be encrypting files that are very large (2 GB+) it tends not to work, or at the very least consumes ~97% of my memory. How could I do this in a more effective manner?
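    The MemoryStream is what forces the whole ciphertext into RAM; a hedged sketch (reusing the fileIn, fileOut and encryptor variables from the question) that streams straight from the input file through the CryptoStream into the output file in 1 MB blocks:

        using (FileStream inFile = File.Open(fileIn, FileMode.Open))
        using (FileStream outFile = File.Open(fileOut, FileMode.Create))
        using (CryptoStream cryptoStream = new CryptoStream(outFile, encryptor, CryptoStreamMode.Write))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read;
            // Only one buffer's worth of plaintext is ever held in memory.
            while ((read = inFile.Read(buffer, 0, buffer.Length)) > 0)
            {
                cryptoStream.Write(buffer, 0, read);
            }
            cryptoStream.FlushFinalBlock(); // writes the final padded block to fileOut
        }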


  • 24TB RAID 6 configuration

    - by Phil
    I am in charge of a new website in a niche industry that stores lots of data (10+ TB per client, growing to 2 or 3 clients soon). We are considering ordering about $5000 worth of 3TB drives (10 in a RAID 6 configuration and 10 for backup), which will give us approximately 24 TB of production storage. The data will be written once and remain unmodified for the lifetime of the website, so we only need to do a backup one time. I understand basic RAID theory, however I am not experienced with it. My question is, does this sound like a good configuration? What potential problems could this setup cause? Also, what is the best way to do a one-time backup? Have two RAID 6 arrays, one for offsite backup and one for production? Or should I backup the RAID 6 production array to a JBOD? EDIT: The data server is running Windows 2008 Server x64. EDIT 2: To reduce rebuild time, what would you think about using two RAID 5's instead of one RAID 6?


  • rm on a directory with millions of files

    - by BMDan
    Background: physical server, about two years old, 7200-RPM SATA drives connected to a 3Ware RAID card, ext3 FS mounted noatime and data=ordered, not under crazy load, kernel 2.6.18-92.1.22.el5, uptime 545 days. The directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones.

    We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory due to it containing too many files. Specifically, it started throwing this error in /var/log/messages:

        ext3_dx_add_entry: Directory index full!

    The disk in question has plenty of inodes remaining:

        Filesystem            Inodes   IUsed      IFree IUse% Mounted on
        /dev/sda3           60719104 3465660   57253444    6% /

    So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at - I want to tune it down; this enormous directory caused all sorts of issues.

    Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here:

    1. rm -rf (dir). I tried this first. I gave up and killed it after it had run for a day and a half without any discernible impact.

    2. unlink(2) on the directory. Definitely worth consideration, but the question is whether it'd be faster to delete the files inside the directory via fsck than to delete via unlink(2). That is, one way or another, I've got to mark those inodes as unused. This assumes, of course, that I can tell fsck not to drop the entries to the files into /lost+found; otherwise, I've just moved my problem. In addition to all the other concerns, after reading about this a bit more, it turns out I'd probably have to call some internal FS functions, as none of the unlink(2) variants I can find would allow me to just blithely delete a directory with entries in it. Pooh.

    3. A shell loop:

        while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; done

    This is actually the shortened version; the real one I'm running, which just adds some progress reporting and a clean stop when we run out of files to delete, is:

        export i=0;
        time ( while [ true ]; do
            ls -Uf | head -n 3 | grep -qF '.png' || break;
            ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null;
            export i=$(($i+10000));
            echo "$i...";
        done )

    This seems to be working rather well. As I write this, it's deleted 260,000 files in the past thirty minutes or so.

    Now, for the questions:

    1. As mentioned above, is the per-directory entry limit tunable?
    2. Why did it take "real 7m9.561s / user 0m0.001s / sys 0m0.001s" to delete a single file which was the first one in the list returned by "ls -U", and perhaps ten minutes to delete the first 10,000 entries with the command in #3, but now it's hauling along quite happily? For that matter, it deleted 260,000 in about thirty minutes, but has now taken another fifteen minutes to delete 60,000 more. Why the huge swings in speed?
    3. Is there a better way to do this sort of thing? Not "store millions of files in a directory" - I know that's silly, and it wouldn't have happened on my watch.

    Googling the problem and looking through SF and SO offers a lot of variations on "find" that obviously have the wrong idea; it's not going to be faster than my approach for several self-evident reasons. But does the delete-via-fsck idea have any legs? Or something else entirely? I'm eager to hear out-of-the-box (or inside-the-not-well-known-box) thinking. Thanks for reading the small novel; feel free to ask questions and I'll be sure to respond. I'll also update the question with the final number of files and how long the delete script ran once I have that.

    Final script output:

        2970000...
        2980000...
        2990000...
        3000000...
        3010000...

        real    253m59.331s
        user    0m6.061s
        sys     5m4.019s

    So, three million files deleted in a bit over four hours.


  • undelete big files - mission impossible?

    - by johnrembo
    Hi, I've accidentally deleted an outlook.pst (6.7 GB) file while there was only 400 MB of free space left on the primary NTFS partition (WinXP). I've tried several recovery tools to get this file back. "Ontrack Easy Recovery Pro" found 0 pst files (complete scan mode), while "Recover My Files" in sector scan mode found 5 pst's, but 4 of them were only 3 to 28 KB in size, while the 5th was 1 GB. I managed to successfully recover the 1 GB pst file, which turned out to be a year-old copy (the one used after the latest Windows reinstall). Now I'm frustrated and confused: why was the year-old file successfully recovered if there were only 400 MB left on the primary partition? Where has the 6.7 GB file gone? I did some reading (i.e. here), and it seems there's almost no chance of retrieving the file I'm looking for, but wait - none of the recovery tools I've used found a zero-sized pst file. Moreover, if the file were corrupted due to fragmentation, we could use scanpst.exe to fix some errors and survive with 10 or 100 emails missing - whatever. Could you please recommend some more sophisticated recovery tools for this particular task? Appreciate your help - thanks in advance.


  • Cannot copy files from external hard drive to desktop hard drive in Window 7

    - by Mohammad Reza Selim
    I'm trying to copy some old files from one of my external hard drives to the hard drive of my desktop PC. Some files cannot be copied and give an error like "Cannot read from source file or disk". The files are video files (.DAT, .VOB, .MPG) and I have watched them all the way through with no issues, so they aren't corrupted. I'm running Windows 7 with admin permissions. Could anyone let me know the reason and a solution?


  • How to uncompress a 9GB file in Windows FAT32

    - by Kashif
    I have a 2GB RAR file that contains a 9GB video file, and I'm using a FAT32 file system. I want to extract that file, but after 4GB I get an error due to the FAT32 file size limit. How can I extract that video? I know one way is to convert my partition to NTFS, but I don't want to go that route. I've also tried 7-Zip, but it again gives an error after 4GB. Another option would be to split the file, but I don't know how to split a video file that is inside an archive. Any ideas? How can I get around this problem?


  • Advice on a 240,000 sq ft outdoor wireless network

    - by whlspacedude
    I would very much appreciate some advice on purchasing equipment to provide a wireless network that covers the entire area of an outdoor arena. The area is roughly rectangular, 400 ft wide and 600 ft long. It has 6 light towers: one on each of the 400-foot ends and two on each of the 600-foot sides. I can mount on anything and spend as much money as needed. The network needs to support up to 15 wireless HD cameras with audio, plus a public Wi-Fi network. Can someone point me in the right direction as far as equipment and antennas? I can provide any additional information that you may need.


  • 4TB HGST SATA drive only shows 1.62 TB in Windows Server 2012

    - by user136085
    I'm using a Supermicro X9SRE-3F motherboard with the latest BIOS and 2x 4TB drives connected to the on-board SATA controller. If I set the BIOS to RAID and create a RAID 1 array, the array shows up in the BIOS as 3.6TB. However when I boot Windows (on a separate RAID 1 array), the 4TB drives show up individually in disk manager as 2x 1.62TB drives. I could use Windows 2012 to set up software RAID 1, but when I set the BIOS back to 2x individual drives, they still show up in Windows as 2x 1.62TB drives. How do I access the full capacity of these drives? Thanks, Brian Bulaw


  • How to delete everything except .svn directories?

    - by Arek
    I have quite a complex directory tree. There are many subdirectories, and in those subdirectories, alongside other files and directories, are ".svn" directories. Now, under Linux, I want to delete all files and directories except the .svn directories. I've found many solutions for the opposite task - deleting all .svn directories in the tree. Can somebody give me the correct answer for deleting everything except .svn?


  • Delete files from directory: memory exhausted

    - by codeholic
    This question is a logical continuation of http://serverfault.com/questions/45245/how-can-i-delete-all-files-from-a-directory-when-it-reports-argument-list-too-lo

    I have:

        drwxr-xr-x 2 doreshkin doreshkin 198291456 Apr 6 21:35 session_data

    I tried:

        find session_data -type f -delete
        find session_data -type f | xargs rm -f
        find session_data -maxdepth 1 -type f -print0 | xargs -r0 rm -f

    The result is the same:

        find: memory exhausted

    What can I do to remove this directory?


  • Does NTFS performance degrade significantly in volumes larger than five or six TB?

    - by Josh Yeager
    One of my customers is planning to set up a new document store, which will probably grow by 1-2TB per year. One of my co-workers says that Windows performance is extremely bad if it has a single NTFS volume that is bigger than five or six TB. He thinks that we need to set up their system with multiple volumes so that no single volume will exceed that limit. Is this a real problem? Does Windows or NTFS slow down when the volume size reaches several terabytes? Or is it possible to create a single volume of 10 or more TB?


  • How to burn a 8.5GB ISO?

    - by Vilx-
    I have an ISO image of a DVD which is 8.5GB in size. I find this strange, because that is about 500MB more than a standard DL DVD can hold. I tried overburning with Nero, but it failed. Is it possible to somehow burn such an image? Are there some special DVD blanks that allow you to write more? Or is this ISO simply made by some tool without any regards of whether it can be burned or not?

