Search Results

Search found 33736 results on 1350 pages for 'project structure'.


  • Is ACE reactor timer management thread safe?

    - by idimba
    I have a module that manages timers in my application. An instance of ACE_Reactor is used internally by the module to manage the timers, and the class has basically three functions: schedule timer - calls ACE_Reactor::schedule_timer(); one of the arguments is a callback, called upon timer expiration. cancel timer - calls ACE_Reactor::cancel_timer(). The reactor runs in a private thread of execution, so schedule/cancel and the timeout callback are executed in different threads. ACE_Reactor::schedule_timer() receives a heap-allocated structure (the arg argument). This structure is later deleted when the timer is canceled or when the timeout handler is called. But since cancel and the timeout handler run in different threads, it looks like there are cases where the structure is deleted twice. Isn't it the responsibility of the reactor to ensure that the timer is canceled when the timeout handler is called?

    Read the article

  • Schemas and tables versus user-ids in a single table using PostgreSQL

    - by gvkv
    I'm developing a web app and I've come to a fork in the road with respect to database structure, and I don't know which direction to take. I have a database with user information that I can structure in one of two ways. The first is to create a schema and a set of tables for each user (duplicating the structure for each user), and the second is to create a single set of tables and query information based on user-id. Suppose 100,000 users. Here are my questions: Considering security, performance, scalability and administration, where does each choice lie? Would the answers change for 1,000,000 or 10,000 users? Is there a set of best practices that leads to one choice or the other? It seems to me that multiple schemas are more secure, since it's trivial to restrict user privileges, but what about performance and scalability? Administration seems like a wash, since dumping (and restoring) lots of schemas isn't any more difficult than dumping a few.

    Read the article

  • EXCEPTION_RECORD in Python 2.5 problem

    - by amir
    I'm using Python 2.5 and the following code produces two errors. Can anybody help me?

      class EXCEPTION_RECORD(Structure):
          _fields_ = [
              ("ExceptionCode", DWORD),
              ("ExceptionFlags", DWORD),
              ("ExceptionRecord", POINTER(EXCEPTION_RECORD)),
              ("ExceptionAddress", LPVOID),
              ("NumberParameters", DWORD),
              ("ExceptionInformation", ULONG_PTR * EXCEPTION_MAXIMUM_PARAMETERS)]

    Python error:

      Traceback (most recent call last):
        File "E:\Python25\my_debugger_defines.py", line 70, in <module>
          class EXCEPTION_RECORD(Structure):
        File "E:\Python25\my_debugger_defines.py", line 74, in EXCEPTION_RECORD
          ("ExceptionRecord", POINTER(EXCEPTION_RECORD)),
      NameError: name 'EXCEPTION_RECORD' is not defined

    Microsoft documentation: The EXCEPTION_RECORD structure describes an exception.

      typedef struct _EXCEPTION_RECORD { // exr
          DWORD ExceptionCode;
          DWORD ExceptionFlags;
          struct _EXCEPTION_RECORD *ExceptionRecord;
          PVOID ExceptionAddress;
          DWORD NumberParameters;
          DWORD ExceptionInformation[EXCEPTION_MAXIMUM_PARAMETERS];
      } EXCEPTION_RECORD;

    Thanks in advance
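
    The NameError comes from EXCEPTION_RECORD referring to itself while its own class body is still executing. A minimal sketch of the usual ctypes pattern for self-referential structures follows: declare the class first, then attach _fields_ afterwards. The DWORD/LPVOID/ULONG_PTR stand-ins are assumptions for illustration (adjust them to your own typedefs); EXCEPTION_MAXIMUM_PARAMETERS is 15 in winnt.h.

      from ctypes import Structure, POINTER, c_ulong, c_void_p, sizeof

      DWORD = c_ulong                       # assumed stand-ins for the Win32 typedefs
      LPVOID = c_void_p
      ULONG_PTR = c_ulong                   # pointer-sized on 32-bit builds; use c_ulonglong on 64-bit
      EXCEPTION_MAXIMUM_PARAMETERS = 15     # value from winnt.h

      class EXCEPTION_RECORD(Structure):
          pass                              # body is attached below, once the name exists

      # Because the class object already exists, POINTER(EXCEPTION_RECORD)
      # resolves and the self-reference no longer raises a NameError.
      EXCEPTION_RECORD._fields_ = [
          ("ExceptionCode", DWORD),
          ("ExceptionFlags", DWORD),
          ("ExceptionRecord", POINTER(EXCEPTION_RECORD)),
          ("ExceptionAddress", LPVOID),
          ("NumberParameters", DWORD),
          ("ExceptionInformation", ULONG_PTR * EXCEPTION_MAXIMUM_PARAMETERS),
      ]

      print(sizeof(EXCEPTION_RECORD))       # sanity check: the layout builds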

    Read the article

  • Next step for Python app using SQLite db

    - by ChrisC
    I want to write a db program in Python using SQLite. I have the db table structure planned and am ready to move to the next step, which I think is to work any bugs out of the db structure. I am totally inexperienced in development except for writing the original db (in MS Access) and an Intro to C++ class (OOP concepts and console C++ programs). Is it time to test the db structure? If so, what's the best way, and what tool(s) should I use? Thank you.
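
    One low-friction way to exercise a planned structure is an in-memory SQLite database driven by Python's built-in sqlite3 module, as in the sketch below. The category/item tables are made-up placeholders standing in for your own schema.

      import sqlite3

      SCHEMA = """
      CREATE TABLE category (
          id   INTEGER PRIMARY KEY,
          name TEXT NOT NULL UNIQUE
      );
      CREATE TABLE item (
          id          INTEGER PRIMARY KEY,
          name        TEXT NOT NULL,
          category_id INTEGER NOT NULL REFERENCES category(id)
      );
      """

      con = sqlite3.connect(":memory:")          # throwaway database, nothing touches disk
      con.execute("PRAGMA foreign_keys = ON")    # make SQLite enforce the references
      con.executescript(SCHEMA)

      con.execute("INSERT INTO category (name) VALUES (?)", ("books",))
      con.execute("INSERT INTO item (name, category_id) VALUES (?, ?)", ("Dune", 1))

      # A row that violates the structure should be rejected.
      try:
          con.execute("INSERT INTO item (name, category_id) VALUES (?, ?)", ("orphan", 99))
      except sqlite3.IntegrityError as exc:
          print("structure caught bad data:", exc)

      con.close()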

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to have the application write to sub-directories such as ./a/b/c/abc.ext rather than just ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? Or in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
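
    Not an answer to the ball-park number itself, but a sketch of the usual fan-out trick: hash the file name and use the leading hex digits as directory names, so each extra level divides the per-directory count by 16. Three levels of one hex digit each give 4096 leaf directories, i.e. roughly 730 files per directory for three million files. Paths and the md5 choice below are illustrative assumptions.

      import hashlib
      import os

      def sharded_path(root, filename, levels=3):
          """Map abc.ext to something like root/9/0/b/abc.ext."""
          digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
          subdirs = list(digest[:levels])               # one hex digit per level
          return os.path.join(root, *(subdirs + [filename]))

      def store(root, filename, data):
          path = sharded_path(root, filename)
          os.makedirs(os.path.dirname(path), exist_ok=True)   # create the fan-out lazily
          with open(path, "wb") as handle:
              handle.write(data)

      print(sharded_path("/var/data", "abc.ext"))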

    Read the article

  • Alternative to C++ static virtual methods

    - by Jaime Pardos
    In C++ it is not possible to declare a static virtual function, nor to cast a non-static member function to a C-style function pointer. Now, I have a plain ol' C SDK that uses function pointers heavily, and I have to fill a structure with several function pointers. I was planning to use an abstract class with a bunch of static pure virtual methods, redefine them in derived classes and fill the structure with them. It wasn't until then that I realized that static virtual methods are not allowed in C++. Is there any good alternative? The best I can think of is defining some pure virtual methods GetFuncA(), GetFuncB(), ... and some static members FuncA()/FuncB() in each derived class, which would be returned by the GetFuncX() methods. Then a function in the abstract class would call those functions to get the pointers and fill the structure.

    Read the article

  • Filtering and copying with PowerShell

    - by Bergius
    In my quest to improve my PowerShell skills, here's an example of an ugly solution to a simple problem. Any suggestions on how to improve the one-liner are welcome. Mission: trim a huge icon library down to something a bit more manageable. The original directory structure looks like this:

      /Apps and Utilities
          /Compile
              /32 Bit Alpha png
                  Compile 16 n p.png
                  + 10 or more files
              + 5 more formats with 10 or more files each
          + 20 or so icon names
      + 22 more categories

    I want to copy the 32 Bit Alpha pngs and flatten the directory structure a bit. Here's my quick and very dirty solution:

      $dest = mkdir c:\icons;
      gci -r | ? { $_.Name -eq '32 Bit Alpha png' } |
          % { mkdir ("$dest\" + $_.Parent.Parent.Name + "\" + $_.Parent.Name); $_ } |
          gci |
          % { cp $_.FullName -dest ("$dest\" + $_.Directory.Parent.Parent + "\" + $_.Directory.Parent) }

    Not nice, but it solved my problem. Resulting structure:

      /Apps and Utilities
          /Compile
              Compile 16 n p.png
              /etc
          /etc
      /etc

    How would you do it?

    Read the article

  • implementing feature structures: what data type to use?

    - by Dervin Thunk
    Hello. In simplistic terms, a feature structure is an unordered list of attribute-value pairs, [number:sg, person:3 | _ ], which can be embedded: [cat:np, agr:[number:sg, person:3 | _ ] | _ ]. It can subindex stuff and share the value, [number:[1], person:3 | _ ], where [1] is another feature structure (that is, it allows reentrancy). My question is: what data structure do people think this should be implemented with for later access to values, to perform unification between two feature structures, to "type" them, etc.? There is a full book on this, but it's in Lisp, which simplifies list handling. So, my choices are: a hash of lists, a list of lists, or a trie. What do people think about this?
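
    For what it's worth, a hash of hashes (plain nested dicts in Python) keeps value access and unification short. A minimal sketch follows; it covers the attribute-value part but not reentrancy/shared values, which is exactly the piece where a union-find or graph layer would have to come in.

      def unify(a, b):
          """Return the unification of two feature structures, or None on a clash."""
          if not isinstance(a, dict) or not isinstance(b, dict):
              return a if a == b else None          # atomic values must match exactly
          result = dict(a)
          for key, value in b.items():
              if key in result:
                  merged = unify(result[key], value)
                  if merged is None:
                      return None                   # incompatible substructures
                  result[key] = merged
              else:
                  result[key] = value
          return result

      np = {"cat": "np", "agr": {"number": "sg", "person": 3}}
      print(unify(np, {"agr": {"number": "sg"}}))   # {'cat': 'np', 'agr': {'number': 'sg', 'person': 3}}
      print(unify(np, {"agr": {"number": "pl"}}))   # None (number clash)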

    Read the article

  • Error when starting an activity

    - by Adam
    I'm starting an activity when a button is pressed, and normally (in other apps) I haven't had an issue. But when I press the button in this app, I get an "unable to marshal value" error. Exact(ish) error from LogCat:

      03-22 02:49:02.883: WARN/System.err(252): java.lang.RuntimeException: Parcel: unable to marshal value {CLASSNAME}@44dcf1b8

    I feel that this might be related to the extra that I'm passing to the intent: I'm passing an ArrayList as a serializable to this new intent. My concern is that the data structure the ArrayList contains isn't being serialized (as it's a custom data structure of mine). Is the ArrayList's content data structure causing this, or is it something else that I'm missing?

    Read the article

  • Same table NHibernate mapping

    - by mircea .
    How can I go about defining a same-table relation mapping (mapping by code) using NHibernate? For instance, let's say I have a class:

      public class Structure {
          public int structureId;
          public string structureName;
          public Structure rootStructure;
      }

    that references the same class as rootStructure.

      mapper.Class<Structure>(m =>
      {
          m.Lazy(true);
          m.Id(u => u.structureId, map => { map.Generator(Generators.Identity); });
          m.Property(c => c.structureName);
          m.? // Same table mapping
      });

    Thanks

    Read the article

  • Can I automap a tree hierarchy with Fluent NHibernate?

    - by NakChak
    Is it possible to auto map a simple nested object structure? Something like this:

      public class Employee : Entity
      {
          public Employee()
          {
              this.Manages = new List<Employee>();
          }

          public virtual string FirstName { get; set; }
          public virtual string LastName { get; set; }
          public virtual bool IsLineManager { get; set; }
          public virtual Employee Manager { get; set; }
          public virtual IList<Employee> Manages { get; set; }
      }

    It causes the following error at run time:

      Repeated column in mapping for collection: SharpKtulu.Core.Employee.Manages column: EmployeeFk

    Is it possible to automap this sort of structure, or do I have to override the auto mapper for this sort of structure?

    Read the article

  • Handling Data Hierarchies in code

    - by Miau
    Hi there. So, say I have a string to parse with a given format that maps to a tree-like data structure. The string is kind of similar to a folder path, and the structure is similar to a file structure, except it's got some rules. So for something@cat1@otherSomething you would get /something/cat1/otherSomething, and for something@cat2@otherSomething you would get /something/cat2/otherSomething. Other examples: /OtherThing/cat1/otherSomething/Blah and /OtherThing/cat4/otherSomething. Here something, cat1, otherSomething, etc. are some sort of instances of ICategory. There are certain rules that control which subcategories are valid and which are not acceptable. At the moment I'm considering a heavy object hierarchy, but I know this is not a flexible solution; I'd prefer the categories to be a bit more general, but again, since there are rules about what can go next, I'm not sure how to do this. An example of a rule can be: OtherThing can only have subcategories cat1 and cat4 (anything else is invalid). An option would be to use some sort of convention-based approach to instantiate a particular class given a subsection of the string (like cat4), but it seems a bit too complex. I'm all ears. Thanks
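
    One lighter-weight alternative to a deep class hierarchy is to keep the rules as data and validate while parsing, as in this Python sketch. The rule table below is invented for illustration and only covers the examples given in the question.

      ALLOWED_CHILDREN = {
          "OtherThing":     {"cat1", "cat4"},      # the stated rule
          "something":      {"cat1", "cat2"},
          "cat1":           {"otherSomething"},
          "cat2":           {"otherSomething"},
          "cat4":           {"otherSomething"},
          "otherSomething": {"Blah"},
      }

      def parse(raw):
          """Turn 'a@b@c' into '/a/b/c', enforcing the parent/child rules."""
          parts = raw.split("@")
          for parent, child in zip(parts, parts[1:]):
              if child not in ALLOWED_CHILDREN.get(parent, set()):
                  raise ValueError("%r may not contain %r" % (parent, child))
          return "/" + "/".join(parts)

      print(parse("something@cat1@otherSomething"))     # /something/cat1/otherSomething
      try:
          parse("OtherThing@cat2@otherSomething")       # violates the OtherThing rule
      except ValueError as err:
          print("rejected:", err)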

    Read the article

  • LBA48 in Linux SCSI ATA Passthrough

    - by Ben Englert
    I am writing a custom disk monitoring/diagnostics app which, among other things, needs to do stuff to SATA disks behind a SAS PCI card under Linux. So far I am following this guide as well as the example code in sg_utils to pass ATA taskfiles through the SCSI layer, and it seems to be working okay. However, in both cases the CDB data structure (pointed to by the cmdp member of the sg_io argument to the ioctl) has only one unsigned char worth of space for the number of sectors. If you look at the ata_taskfile structure in linux/ata.h you'll see that it has an "nsect" and a "hob_nsect" field - high-order bits for the sector count, to support LBA48. It turns out that in my application I need LBA48 support. So, does anyone know how to set up an sg_io_hdr structure with an LBA48 sector count?

    Read the article

  • Can I Automap a tree hierarchy with Fluent NHibernate?

    - by NakChak
    Is it possible to auto map a simple nested object structure? Something like this:

      public class Employee : Entity
      {
          public Employee()
          {
              this.Manages = new List<Employee>();
          }

          public virtual string FirstName { get; set; }
          public virtual string LastName { get; set; }
          public virtual bool IsLineManager { get; set; }
          public virtual Employee Manager { get; set; }
          public virtual IList<Employee> Manages { get; set; }
      }

    It causes the following error at run time:

      Repeated column in mapping for collection: SharpKtulu.Core.Employee.Manages column: EmployeeFk

    Is it possible to automap this sort of structure, or do I have to override the auto mapper for this sort of structure?

    Read the article

  • MongoDB many-to-many with one query?

    - by PowderKeg
    In MySQL I use JOIN and one query is no problem. What about Mongo? Imagine categories and products: products may have multiple categories, and categories may have multiple products (a many-to-many structure), and an administrator may edit categories in the admin area (so categories must be kept separate). Is it possible to fetch a product with its category names in one query? I used this structure:

      categories {
          name: "categoryName",
          product_id: ["4b5783300334000000000aa9", "5783300334000000000aa943", "6c6793300334001000000006"]
      }

      products {
          name: "productName",
          category_id: ["4b5783300334000000000bb9", "5783300334000000000bb943", "6c6793300334001000000116"]
      }

    Now I can simply get all of a product's categories, the products in some category, and the categories alone for editing. But if I want to fetch a product with its category names, I need two queries: one to get the product's category ids and a second to get the category names from categories by those ids. Is this the right way, or is this structure unsuitable? I would like to have only one query, but I don't know if it's possible.
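
    A common answer in Mongo is to denormalise: keep the category ids for editing, but also copy the category names onto the product so one find() can render it; a rename then needs a follow-up update on products. A rough pymongo sketch follows; the database, collection and field names are made up, and it assumes a local mongod.

      from pymongo import MongoClient

      db = MongoClient()["shop"]

      cat_id = db.categories.insert_one({"name": "categoryName"}).inserted_id
      db.products.insert_one({
          "name": "productName",
          "categories": [{"_id": cat_id, "name": "categoryName"}],   # id plus copied name
      })

      # One query now returns the product together with its category names.
      print(db.products.find_one({"name": "productName"}))

      # When an admin renames a category, propagate the copied name as well.
      db.categories.update_one({"_id": cat_id}, {"$set": {"name": "newName"}})
      db.products.update_many(
          {"categories._id": cat_id},
          {"$set": {"categories.$.name": "newName"}},   # "$" patches the matched array entry
      )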

    Read the article

  • Chess board position numbers in a 6-rooted binary tree?

    - by HH
    The maximum number of adjacent vertices is 6, which corresponds to the number of roots. By the term root, I mean the number of children of each node. If an adjacent square is empty, fill it with a Z-node, so every square will have 6 nodes. How can you formulate this with a binary tree? Is the structure just a 6-rooted binary tree? What is the structure called if nodes change their positions? Suppose a partially ordered list whose units store a large, randomly expanding board. I want a self-adjusting data structure where it is easy to calculate distances between nodes. What is its name?

    Read the article

  • Sending a packet to multiple clients at a time from a UDP socket

    - by mawia
    Hi all, I was trying to write a UDP server that sends a copy of a file to multiple clients. Now suppose that somehow I manage to know the addresses of those clients statically (for the sake of simplicity), and I want to send this packet to those addresses. How exactly do I need to fill the sockaddr structure to contain the address of each client? I am using an array of sockaddr structures (to contain the client addresses) and trying to send to each one of them, one at a time. Now the problem is filling each individual sockaddr structure with a client address. I was trying to do something like this:

      sa[1].sin_family = AF_INET;
      sa[1].sin_addr.s_addr = htonl(INADDR_ANY); // shouldn't I replace this INADDR_ANY with the client IP?
      sa[1].sin_port = htons(50002);

    Correct me if this is not the correct way. All your help in this regard will be highly appreciated. With thanks in advance, Mawia
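
    Not the C structure fill itself, but the same idea sketched in Python: each destination gets an explicit (ip, port) pair, which is the counterpart of putting the client's address (rather than INADDR_ANY) into sin_addr, and the datagram is sent to each address in turn. The addresses below are invented for illustration.

      import socket

      CLIENTS = [("192.168.1.10", 50002), ("192.168.1.11", 50002)]   # statically known client addresses

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      payload = b"one chunk of the file"
      for address in CLIENTS:
          sock.sendto(payload, address)      # one send per client; UDP needs no connection
      sock.close()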

    Read the article

  • CakePHP repeats same queries

    - by Rytis
    I have a model structure: Category hasMany Product hasMany Stockitem belongsTo Warehouse, Manufacturer. I fetch data with this code, using containable to be able to filter deeper in the associated models:

      $this->Category->find('all', array(
          'conditions' => array('Category.id' => $category_id),
          'contain' => array(
              'Product' => array(
                  'Stockitem' => array(
                      'conditions' => array('Stockitem.warehouse_id' => $warehouse_id),
                      'Warehouse',
                      'Manufacturer',
                  )
              )
          ),
      ));

    The data structure is returned just fine; however, I get multiple repeating queries like the one below, sometimes hundreds of such queries in a row, depending on the dataset.

      SELECT `Warehouse`.`id`, `Warehouse`.`title` FROM `beta_warehouses` AS `Warehouse` WHERE `Warehouse`.`id` = 2

    Basically, when building the data structure Cake is fetching data from MySQL over and over again, for each row. We have datasets of several thousand rows, and I have a feeling that it's going to impact performance. Is it possible to make it cache results and not repeat the same queries?

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.

    Read the article

  • Right way to implement an n-to-m relation

    - by ThreeFingerMark
    Hello, this is part of my database structure:

      Table: Item       Columns: ItemID, Title, Content, Price
      Table: Tag        Columns: TagID, Title
      Table: ItemTag    Columns: ItemID, TagID
      Table: Image      Columns: ImageID, Path, Size, UploadDate
      Table: ItemImage  Columns: ItemID, ImageID

    The items can have more than one image, so I have an extra table "Image" and map these images to items. I now see a problem with this structure: before I can add images I must enter an item. My question is now: is this structure a good way to solve my problem with many images / tags for one item? Thank you

    Read the article

  • Posting a form using JavaScript and JSON with nested objects

    - by Tim
    I have an HTML form with the following structure:

      <input type="text" name="title" />
      <input type="text" name="persons[0].name" />
      <input type="text" name="persons[0].color" />
      <input type="text" name="persons[1].name" />
      <input type="text" name="persons[1].color" />

    I would like to serialize this into the following JSON:

      {
          "title": "titlecontent",
          "persons": [
              { "name": "person0name", "color": "red" },
              { "name": "person1name", "color": "blue" }
          ]
      }

    Notice that the number of persons can vary from case to case. The structure of the HTML form can change, but the structure of the submitted JSON can't. What is the easiest way to do this?
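
    The grouping step itself is language-agnostic; here is the key-parsing idea sketched in Python rather than client-side JavaScript: names like persons[0].name are split into (field, index, attribute) and folded into the nested shape the server expects. The flat input dict mirrors the form values above.

      import json
      import re

      FLAT = {
          "title": "titlecontent",
          "persons[0].name": "person0name",
          "persons[0].color": "red",
          "persons[1].name": "person1name",
          "persons[1].color": "blue",
      }

      def nest(flat):
          """Fold 'field[i].attr' keys into lists of dicts; plain keys pass through."""
          out = {}
          pattern = re.compile(r"^(\w+)\[(\d+)\]\.(\w+)$")
          for key, value in flat.items():
              match = pattern.match(key)
              if not match:
                  out[key] = value
                  continue
              field, index, attr = match.group(1), int(match.group(2)), match.group(3)
              items = out.setdefault(field, [])
              while len(items) <= index:      # grow the list up to the referenced index
                  items.append({})
              items[index][attr] = value
          return out

      print(json.dumps(nest(FLAT), indent=2))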

    Read the article

  • Comparing 2 LINQ applications: unexpected result

    - by lukesky
    I drafted 2 ASP.NET applications using LINQ. One connects to MS SQL Server, the other to a proprietary in-memory structure. Both applications work with tables of 3 int fields, having 500,000 records (the memory structure is identical to the SQL Server table). The controls used are regular: GridView and ObjectDataSource. In the applications I calculate the average time needed to process each paging click. The LINQ + MS SQL application needs 0.1 sec per page change; the LINQ + memory structure application needs 0.8 sec per page change. This is a shocking result. Why does the application handling data in memory work 8 times slower than the application using the hard drive? Can anybody tell me why that happens?

    Read the article

  • Store data in an inconvenient table or create a derived table?

    - by user1705685
    I have a certain predefined database structure that I am stuck with. The question is whether this structure is OK for ORM, or whether I should add a processing layer that would create a more convenient structure every time something is inserted into the original DB. To simplify, here's what it kind of looks like. I have a person table:

      PersonId
      Name

    And I have a properties table:

      PersonId
      PropertyType
      PropertyValue

    So, for person John Doe...

      (1, 'John Doe')

    ...I could have three properties:

      (1, 'phone', '555-55-55'),
      (1, 'email', '[email protected]'),
      (1, 'type', 'employee')

    By using ORM I would like to get a "person" object that would have the properties "name", "phone", "email", "type". Can Propel do that? How efficient is it? Is it a better idea to create a table with columns "phone", "email", "type" and fill it automatically as new rows are inserted into the properties table?
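
    Independent of what Propel can do, the "processing layer" can be as small as a pivot step that folds the property rows into each person on the way out, as in this Python sketch. The sample rows mirror the question, with an invented placeholder e-mail address.

      person_rows = [(1, "John Doe")]
      property_rows = [
          (1, "phone", "555-55-55"),
          (1, "email", "john.doe@example.com"),   # placeholder address for illustration
          (1, "type", "employee"),
      ]

      def build_people(persons, properties):
          """Fold (person_id, property_type, property_value) rows into one dict per person."""
          people = {pid: {"name": name} for pid, name in persons}
          for pid, ptype, pvalue in properties:
              people.setdefault(pid, {})[ptype] = pvalue
          return people

      print(build_people(person_rows, property_rows))
      # {1: {'name': 'John Doe', 'phone': '555-55-55', 'email': ..., 'type': 'employee'}}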

    Read the article

  • Python - a clean approach to this problem?

    - by Seafoid
    Hi, I am having trouble picking the best data structure for solving a problem. The problem is as below: I have a nested list of identity codes where the sublists are of varying length.

      li = [['abc', 'ghi', 'lmn'], ['kop'], ['hgi', 'ghy']]

    I have a file with two entries on each line: an identity code and a number.

      abc 2.93
      ghi 3.87
      lmn 5.96

    Each sublist represents a cluster. I wish to select the id from each sublist with the highest number associated with it, append that id to a new list and ultimately write it to a new file. What data structure should the file with numbers be read in as? Also, how would you iterate over said data structure to return the id with the highest number that matches an id within a sublist? Thanks, S :-)
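
    A plain dict keyed by identity code is usually enough here. A possible sketch follows; the input and output file names are assumed.

      li = [['abc', 'ghi', 'lmn'], ['kop'], ['hgi', 'ghy']]

      scores = {}
      with open("scores.txt") as handle:            # the "id number" file
          for line in handle:
              ident, value = line.split()
              scores[ident] = float(value)

      # For each cluster keep the id with the highest score, skipping ids
      # that never appear in the file.
      best = []
      for cluster in li:
          known = [i for i in cluster if i in scores]
          if known:
              best.append(max(known, key=scores.get))

      with open("best_ids.txt", "w") as out:
          out.write("\n".join(best) + "\n")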

    Read the article

  • Purely functional equivalent of weakhashmap?

    - by Jon Harrop
    Weak hash tables like Java's weak hash map use weak references to track the collection of unreachable keys by the garbage collector and remove bindings with that key from the collection. Weak hash tables are typically used to implement indirections from one vertex or edge in a graph to another because they allow the garbage collector to collect unreachable portions of the graph. Is there a purely functional equivalent of this data structure? If not, how might one be created? This seems like an interesting challenge. The internal implementation cannot be pure because it must collect (i.e. mutate) the data structure in order to remove unreachable parts but I believe it could present a pure interface to the user, who could never observe the impurities because they only affect portions of the data structure that the user can, by definition, no longer reach.

    Read the article
