Search Results

Search found 10098 results on 404 pages for 'per pixel'.


  • Reading Xml with XmlReader in C#

    - by Gloria Huang
     I'm trying to read the following Xml document as fast as I can and let additional classes manage the reading of each sub-block. <ApplicationPool><Accounts><Account><NameOfKin></NameOfKin><StatementsAvailable><Statement></Statement></StatementsAvailable></Account></Accounts></ApplicationPool> I can't seem to format the above nicely :( However, I'm trying to use the XmlReader object to read each Account and subsequently the "StatementsAvailable". Do you suggest using XmlReader.Read, checking each element and handling it? I've thought of separating my classes to handle each node properly. So there's an AccountBase class that accepts an XmlReader instance and reads the NameOfKin and several other properties about the account. Then I want to iterate through the Statements and let another class fill itself out about the Statement (and subsequently add it to an IList). Thus far I have the "per class" part done by calling XmlReader.ReadElementString(), but I can't work out how to tell the pointer to move to the StatementsAvailable element and let me iterate through them and let another class read each of those properties. Sounds easy!
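
     One way to move the cursor as described, sketched here with XmlReader.ReadToFollowing and ReadSubtree so each Account (and each Statement inside it) can be handed to its own class; the file name and the statement handling are placeholders:

       // requires using System.Xml;
       using (XmlReader reader = XmlReader.Create("accounts.xml"))   // file name assumed
       {
           while (reader.ReadToFollowing("Account"))
           {
               // ReadSubtree() yields a reader scoped to this <Account> element only
               using (XmlReader account = reader.ReadSubtree())
               {
                   account.ReadToDescendant("NameOfKin");
                   string nameOfKin = account.ReadElementContentAsString();

                   // walk the statements and let another class consume each one
                   while (account.ReadToFollowing("Statement"))
                   {
                       // e.g. a Statement class could read its own properties from 'account' here
                   }
               }
           }
       }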

    Read the article

  • Advice on optimizing speed for a Stored Procedure that uses Views

    - by Belliez
    Based on a previous question and with a lot of help from Damir Sudarevic (thanks) I have the following sql code which works great but is very slow. Can anyone suggest how I can speed this up and optimise for speed. I am now using SQL Server Express 2008 (not 2005 as per my original question). What this code does is retrieves parameters and their associated values from several tables and rotates the table in a form that can be easily compared. Its great for one of two rows of data but now I am testing with 100 rows and to run GetJobParameters takes over 7 minutes to complete? Any advice is gratefully accepted, thank you in advanced. /*********************************************************************************************** ** CREATE A VIEW (VIRTUAL TABLE) TO ALLOW EASIER RETREIVAL OF PARMETERS ************************************************************************************************/ CREATE VIEW dbo.vParameters AS SELECT m.MachineID AS [Machine ID] ,j.JobID AS [Job ID] ,p.ParamID AS [Param ID] ,t.ParamTypeID AS [Param Type ID] ,m.Name AS [Machine Name] ,j.Name AS [Job Name] ,t.Name AS [Param Type Name] ,t.JobDataType AS [Job DataType] ,x.Value AS [Measurement Value] ,x.Unit AS [Unit] ,y.Value AS [JobDataType] FROM dbo.Machines AS m JOIN dbo.JobFiles AS j ON j.MachineID = m.MachineID JOIN dbo.JobParams AS p ON p.JobFileID = j.JobID JOIN dbo.JobParamType AS t ON t.ParamTypeID = p.ParamTypeID LEFT JOIN dbo.JobMeasurement AS x ON x.ParamID = p.ParamID LEFT JOIN dbo.JobTrait AS y ON y.ParamID = p.ParamID GO -- Step 2 CREATE VIEW dbo.vJobValues AS SELECT [Job Name] ,[Param Type Name] ,COALESCE(cast([Measurement Value] AS varchar(50)), [JobDataType]) AS [Val] FROM dbo.vParameters GO /*********************************************************************************************** ** GET JOB PARMETERS FROM THE VIEW JUST CREATED ************************************************************************************************/ CREATE PROCEDURE GetJobParameters AS -- Step 3 DECLARE @Params TABLE ( id int IDENTITY (1,1) ,ParamName varchar(50) ); INSERT INTO @Params (ParamName) SELECT DISTINCT [Name] FROM dbo.JobParamType -- Step 4 DECLARE @qw TABLE( id int IDENTITY (1,1) , txt nchar(300) ) INSERT INTO @qw (txt) SELECT 'SELECT' UNION SELECT '[Job Name]' ; INSERT INTO @qw (txt) SELECT ',MAX(CASE [Param Type Name] WHEN ''' + ParamName + ''' THEN Val ELSE NULL END) AS [' + ParamName + ']' FROM @Params ORDER BY id; INSERT INTO @qw (txt) SELECT 'FROM dbo.vJobValues' UNION SELECT 'GROUP BY [Job Name]' UNION SELECT 'ORDER BY [Job Name]'; -- Step 5 --SELECT txt FROM @qw DECLARE @sql_output VARCHAR (MAX) SET @sql_output = '' -- NULL + '' = NULL, so we need to have a seed SELECT @sql_output = -- string to avoid losing the first line. COALESCE (@sql_output + txt + char (10), '') FROM @qw EXEC (@sql_output) GO
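
     One thing worth checking before touching the dynamic SQL is whether the columns the view joins and filters on are indexed; a hedged starting point (index names are made up, and any that already exist can be skipped):

       CREATE INDEX IX_JobFiles_MachineID      ON dbo.JobFiles (MachineID);
       CREATE INDEX IX_JobParams_JobFileID     ON dbo.JobParams (JobFileID);
       CREATE INDEX IX_JobParams_ParamTypeID   ON dbo.JobParams (ParamTypeID);
       CREATE INDEX IX_JobMeasurement_ParamID  ON dbo.JobMeasurement (ParamID);
       CREATE INDEX IX_JobTrait_ParamID        ON dbo.JobTrait (ParamID);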

    Read the article

  • Jackson XML globally set element name for container types

    - by maxenglander
    I'm using Jackson 1.9.2 with the XML databind module. I need to tweak the way that Jackson serializes arrays, lists, collections. By default, with an int array property called myProperty containing a couple numbers, Jackson / XML is producing the following: <myProperty> <myProperty>1</myProperty> <myProperty>2</myProperty> </myProperty> What I need to produce is: <myProperty> <item>1</item> <item>2</item> </myProperty> I can do this on a per-POJO basis using a combination of JacksonXmlElementWrapper and JacksonXmlProperty like so: @JacksonXmlElementWrapper(localname='myProperty') @JacksonXmlProperty(localname='item') public int[] myProperty; This solution, however, would require that I manually apply these annotations to every array, list, collection in my POJOs. A much better solution would allow me to apply a solution once, globally, for all array, list, collection types. Any ideas on how to implement such a solution? Thanks!

    Read the article

  • how to compile with llvm and g++?

    - by Sriram
     Hi, I use a Fedora 11 system and recently installed llvm (sudo yum -y install llvm llvm-docs llvm-devel). When I search for llvm I find it in /usr/bin; some of the links to the binaries are broken (llvm-gcc, llvm-g++, llvm-cpp, etc.). The include files are found under /usr/include/llvm and the libs at /usr/lib/llvm. How do I compile against them using g++? I tried to compile the Kaleidoscope code given in the tutorial (http://llvm.org/docs/tutorial/LangImpl3.html) as directed, but it fails to compile. I get this... toy.cpp:5:30: error: llvm/LLVMContext.h: No such file or directory toy.cpp:352: error: ‘getGlobalContext’ was not declared in this scope toy.cpp: In member function ‘virtual llvm::Value* NumberExprAST::Codegen()’: toy.cpp:358: error: ‘getGlobalContext’ was not declared in this scope toy.cpp: In member function ‘virtual llvm::Value* BinaryExprAST::Codegen()’: toy.cpp:379: error: ‘getDoubleTy’ is not a member of ‘llvm::Type’ toy.cpp:379: error: ‘getGlobalContext’ was not declared in this scope toy.cpp: In member function ‘llvm::Function* PrototypeAST::Codegen()’: toy.cpp:407: error: ‘getDoubleTy’ is not a member of ‘llvm::Type’ toy.cpp:407: error: ‘getGlobalContext’ was not declared in this scope toy.cpp:408: error: ‘getDoubleTy’ is not a member of ‘llvm::Type’ toy.cpp: In member function ‘llvm::Function* FunctionAST::Codegen()’: toy.cpp:454: error: ‘getGlobalContext’ was not declared in this scope toy.cpp: In function ‘int main()’: toy.cpp:543: error: ‘LLVMContext’ was not declared in this scope toy.cpp:543: error: ‘Context’ was not declared in this scope toy.cpp:543: error: ‘getGlobalContext’ was not declared in this scope I cannot find the LLVMContext.h file either, so I guess this might be a version problem. What should I do to make it work? Some help would be good! Thanks in advance... :)
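
     For reference, the compile line the Kaleidoscope tutorial itself suggests leans on llvm-config to supply the right include and library paths, roughly as below; note that an LLVM release older than 2.6 has no LLVMContext.h at all, and the Fedora 11 package layout may differ:

       # llvm-config ships with llvm-devel; adjust the component list as the tutorial grows
       g++ -g -O2 toy.cpp `llvm-config --cxxflags --ldflags --libs core` -o toy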

    Read the article

  • a floating toolbar in WTL

    - by freefallr
     I've created a multimedia app that uses DirectShow to display multiple media streams simultaneously. The app is a WTL MDI application. For video windows, I use a CWindowImpl-derived class - one per CChildFrame. I'd like to add controls to the video windows (volume controls, etc.). I'd initially thought about adding a slider (volume) control and a couple of buttons to a context menu - but later thought that this might not be the best approach. I was looking at MS Word 2007, which has a floating toolbar that allows you to change options on highlighted text. I'd like to implement a similar floating toolbar for the video controls. I googled around a bit and found an old post about floating toolbars in WTL. The response was: for a floating toolbar, create a popup window and make its parent the main window. I think that this sounds like a reasonable approach. My questions: Is this a good approach, or is there a more standard approach for a floating toolbar now in WTL? Should I make the toolbar a child of the video window or of the CChildFrame that contains the video window, in order to ensure that it always remains on top of the video? How can I implement transparency in the floating toolbar, as in the floating toolbar in MS Word?
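
     For the transparency part, a minimal sketch of the usual Win32 route (a layered window plus per-window alpha); this assumes the toolbar is the popup window described above, owned by the frame:

       // inside the toolbar's OnCreate / initialization (CWindowImpl-derived class)
       ModifyStyleEx(0, WS_EX_LAYERED);                    // mark the window as layered
       ::SetLayeredWindowAttributes(m_hWnd,
                                    0,                     // no color key
                                    (255 * 80) / 100,      // roughly 80% opaque
                                    LWA_ALPHA);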

    Read the article

  • How to estimate the contribution of an individual to a software project?

    - by Amit Kumar
     I work on a software project and would like to estimate the percentage of the total contribution that I have put into the development of the software. Is there some tool that does this? Such a tool could be useful for appraisals or negotiations, for example. After all, we work for money (yes, not only money, but the point remains). I think there is enough hand-waving around the most important things. The estimation is very subjective (at least to me now), but I do not know of any tool that provides even a subjective estimate. I know of Sloccount, which spells out the total effort using the lines of code, but not on a per-developer basis. My idea of an ideal tool for this purpose would be one that can: measure the complexity of the code (more complex is more effort, but more effort is not necessarily more contribution); measure the decomposability/flexibility of the software (more decomposable is better); measure how much library code is used -- using library code speeds up the development process, increases the associated risk and requires the developer to know the library beforehand or learn it; and be intelligent enough to differentiate between "who wrote the code", "who copied the code" and "who indented the code". It is difficult to differentiate between the complexity of the implementation and the intrinsic complexity of the problem. Perhaps a comparison can be made with an equivalent open-source counterpart if one exists, or for each submodule separately. If there is no such tool, is there no merit in having one? Or do you believe in "I do work, I do not measure"? It takes time, after all. Perhaps the project manager should do this estimation continuously, say, weekly. Are there any standards? Yes, standardization is difficult because every project has a different goal, but difficult does not mean it is not useful.

    Read the article

  • noSQL/SQL/RoR: Trying to build scalable ratings table for the game

    - by alexeypro
     I am trying to solve a complex thing (or so it looks to me). I have the following entities: PLAYER (a few of them, with names like "John", "Peter", etc.). Each has a unique ID; for simplicity let's say it's their name. GAME (a few of them, say named "Hide and Seek", "Jump and Run", etc.). Same here - each has a unique ID; for simplicity let it be its name for now. SCORE (it's numeric). So, here is how it works: each PLAYER can play in multiple GAMES, and he gets some SCORE in every GAME. I need to build rating tables -- and not just one! Table #1: most played GAMES. Table #2: best PLAYERS across all games (say, by the total SCORE over every GAME). Table #3: best PLAYERS per GAME (by SCORE in that particular GAME). I could build something straightforward right away, but that will not work: I will have more than 10,000 players and 15 games, which will grow for sure. A score can be as low as 0 and as high as 1,000,000 (not sure if higher is possible at this moment) for a player in a game. So I really need some relative data. Any suggestions? I am planning to do it with SQL, but maybe just use it for key-value storage; anything -- any ideas are welcome. Thank you!
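
     A sketch of how this is often modelled in plain SQL (table and column names are assumptions); with one row per player/game and a covering index, all three rating tables reduce to aggregate queries:

       -- one score row per (player, game)
       CREATE TABLE scores (
         player_id INTEGER NOT NULL,
         game_id   INTEGER NOT NULL,
         score     INTEGER NOT NULL DEFAULT 0,
         PRIMARY KEY (player_id, game_id)
       );
       CREATE INDEX idx_scores_game_score ON scores (game_id, score);

       -- Table #1: most played games
       SELECT game_id, COUNT(*) AS plays FROM scores GROUP BY game_id ORDER BY plays DESC;

       -- Table #2: best players overall
       SELECT player_id, SUM(score) AS total FROM scores GROUP BY player_id ORDER BY total DESC LIMIT 100;

       -- Table #3: best players for one game
       SELECT player_id, score FROM scores WHERE game_id = ? ORDER BY score DESC LIMIT 100;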

    Read the article

  • Linux Kernel - Red/Black Trees

    - by CodeRanger
     I'm trying to implement a red/black tree in Linux per task_struct using code from linux/rbtree.h. I can get a red/black tree inserting properly in a standalone space in the kernel such as a module, but when I try to get the same code to function with the rb_root declared in either task_struct or task_struct->files_struct, I get a SEGFAULT every time I try an insert. Here's some code: In task_struct I create an rb_root struct for my tree (not a pointer). In init_task.h, in the macro INIT_TASK(tsk), I set this equal to RB_ROOT. To do an insert, I use this code: rb_insert(&(current->fd_tree), &rbnode); This is where the issue occurs. My insert command is the standard insert that is documented in all RBTree documentation for the kernel: int my_insert(struct rb_root *root, struct mytype *data) { struct rb_node **new = &(root->rb_node), *parent = NULL; /* Figure out where to put new node */ while (*new) { struct mytype *this = container_of(*new, struct mytype, node); int result = strcmp(data->keystring, this->keystring); parent = *new; if (result < 0) new = &((*new)->rb_left); else if (result > 0) new = &((*new)->rb_right); else return FALSE; } /* Add new node and rebalance tree. */ rb_link_node(&data->node, parent, new); rb_insert_color(&data->node, root); return TRUE; } Is there something I'm missing? Is there some reason this would work fine if I made a tree root outside of task_struct? If I make the rb_root inside of a module this insert works fine. But once I put the actual tree root in the task_struct or even in the task_struct->files_struct, I get a SEGFAULT. Can a root node not be added in these structs? Any tips are greatly appreciated. I've tried nearly everything I can think of.

    Read the article

  • Read large file into sqlite table in objective-C on iPhone

    - by James Testa
     I have a 2 MB file, not too large, that I'd like to put into an sqlite database so that I can search it. There are about 30K entries that are in CSV format, with six fields per line. My understanding is that sqlite on the iPhone can handle a database of this size. I have taken a few approaches, but they have all been slow (more than 30 s). I've tried: 1) Using C code to read the file and parse the fields into arrays. 2) Using the following Objective-C code to parse the file and put it directly into the sqlite database: NSString *file_text = [NSString stringWithContentsOfFile: filePath usedEncoding: NULL error: NULL]; NSArray *lineArray = [file_text componentsSeparatedByString:@"\n"]; for(int k = 0; k < [lineArray count]; k++){ NSArray *parts = [[lineArray objectAtIndex:k] componentsSeparatedByString: @","]; NSString *field0 = [parts objectAtIndex:0]; NSString *field2 = [parts objectAtIndex:2]; NSString *field3 = [parts objectAtIndex:3]; NSString *loadSQLi = [[NSString alloc] initWithFormat: @"INSERT INTO TABLE (TABLE, FIELD0, FIELD2, FIELD3) VALUES ('%@', '%@', '%@');",field0, field2, field3]; if (sqlite3_exec (db_table, [loadSQLi UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) { sqlite3_close(db_table); NSAssert1(0, @"Error loading table: %s", errorMsg); } Am I missing something? Does anyone know of a fast way to get the file into a database? Or is it possible to translate the file into an sqlite format that can be read directly into sqlite? Or should I turn the file into a plist and load it into a Dictionary? Unfortunately I need to search on two of the fields, and I think a Dictionary can only have one key? Jim
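
     The usual culprit for inserts this slow is one implicit transaction (and one freshly parsed statement) per row. A hedged sketch of the same loop wrapped in a single transaction with a prepared, parameterised statement; the table and column names are placeholders:

       sqlite3_exec(db_table, "BEGIN TRANSACTION", NULL, NULL, &errorMsg);

       sqlite3_stmt *stmt = NULL;
       const char *sql = "INSERT INTO entries (field0, field2, field3) VALUES (?, ?, ?)";
       sqlite3_prepare_v2(db_table, sql, -1, &stmt, NULL);

       for (NSString *line in lineArray) {
           NSArray *parts = [line componentsSeparatedByString:@","];
           if ([parts count] < 4) continue;
           sqlite3_bind_text(stmt, 1, [[parts objectAtIndex:0] UTF8String], -1, SQLITE_TRANSIENT);
           sqlite3_bind_text(stmt, 2, [[parts objectAtIndex:2] UTF8String], -1, SQLITE_TRANSIENT);
           sqlite3_bind_text(stmt, 3, [[parts objectAtIndex:3] UTF8String], -1, SQLITE_TRANSIENT);
           sqlite3_step(stmt);
           sqlite3_reset(stmt);   /* reuse the compiled statement for the next row */
       }

       sqlite3_finalize(stmt);
       sqlite3_exec(db_table, "COMMIT", NULL, NULL, &errorMsg);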

    Read the article

  • How are you taking advantage of Multicore?

    - by tgamblin
    As someone in the world of HPC who came from the world of enterprise web development, I'm always curious to see how developers back in the "real world" are taking advantage of parallel computing. This is much more relevant now that all chips are going multicore, and it'll be even more relevant when there are thousands of cores on a chip instead of just a few. My questions are: How does this affect your software roadmap? I'm particularly interested in real stories about how multicore is affecting different software domains, so specify what kind of development you do in your answer (e.g. server side, client-side apps, scientific computing, etc). What are you doing with your existing code to take advantage of multicore machines, and what challenges have you faced? Are you using OpenMP, Erlang, Haskell, CUDA, TBB, UPC or something else? What do you plan to do as concurrency levels continue to increase, and how will you deal with hundreds or thousands of cores? If your domain doesn't easily benefit from parallel computation, then explaining why is interesting, too. Finally, I've framed this as a multicore question, but feel free to talk about other types of parallel computing. If you're porting part of your app to use MapReduce, or if MPI on large clusters is the paradigm for you, then definitely mention that, too. Update: If you do answer #5, mention whether you think things will change if there get to be more cores (100, 1000, etc) than you can feed with available memory bandwidth (seeing as how bandwidth is getting smaller and smaller per core). Can you still use the remaining cores for your application?

    Read the article

  • How can I convert this PHP script to Ruby? (build tree from tabbed string)

    - by Jon Sunrays
    I found this script below online, and I'm wondering how I can do the same thing with a Ruby on Rails setup. So, first off, I ran this command: rails g model Node node_id:integer title:string Given this set up, how can I make a tree from a tabbed string like the following? <?php // Make sure to have "Academia" be root node with nodeID of 1 $data = " Social sciences Anthropology Biological anthropology Forensic anthropology Gene-culture coevolution Human behavioral ecology Human evolution Medical anthropology Paleoanthropology Population genetics Primatology Anthropological linguistics Synchronic linguistics (or Descriptive linguistics) Diachronic linguistics (or Historical linguistics) Ethnolinguistics Sociolinguistics Cultural anthropology Anthropology of religion Economic anthropology Ethnography Ethnohistory Ethnology Ethnomusicology Folklore Mythology Political anthropology Psychological anthropology Archaeology ...(goes on for a long time) "; //echo "Checkpoint 2\n"; $lines = preg_split("/\n/", $data); $parentids = array(0 => null); $db = new PDO("host", 'username', 'pass'); $sql = 'INSERT INTO `TreeNode` SET ParentID = ?, Title = ?'; $stmt = $db->prepare($sql); foreach ($lines as $line) { if (!preg_match('/^([\s]*)(.*)$/', $line, $m)) { continue; } $spaces = strlen($m[1]); //$level = intval($spaces / 4); //assumes four spaces per indent $level = strlen($m[1]); // if data is tab indented $title = $m[2]; $parentid = ($level > 0 ? $parentids[$level - 1] : 1); //All "roots" are children of "Academia" which has an ID of "1"; $rv = $stmt->execute(array($parentid, $title)); $parentids[$level] = $db->lastInsertId(); echo "inserted $parentid - " . $parentid . " title: " . $title . "\n"; } ?>
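
     A rough Ruby equivalent of the PHP loop, assuming the Node model generated above (with node_id acting as the parent-id column), tab-indented input held in data, and an existing "Academia" root row with id 1 as in the PHP comment:

       parent_ids = {}

       data.each_line do |line|
         next if line.strip.empty?
         depth = line[/\A\t*/].size                         # one tab per indent level
         title = line.strip
         parent_id = depth > 0 ? parent_ids[depth - 1] : 1  # roots hang off "Academia" (id 1)
         node = Node.create!(:title => title, :node_id => parent_id)
         parent_ids[depth] = node.id
       end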

    Read the article

  • C# .Net Serial DataReceived Event response too slow for high-rate data.

    - by Matthew
     Hi, I have set up a SerialDataReceivedEventHandler in a forms-based program in VS2008 Express. My serial port is set up as follows: 115200, 8N1, Dtr and Rts enabled, ReceivedBytesThreshold = 1. I have a device I am interfacing with over a Bluetooth USB-to-serial connection. HyperTerminal receives the data just fine at any data rate. The data is sent regularly in 22-byte-long packets. This device has an adjustable rate at which data is sent. At low data rates, 10-20Hz, the code below works great, no problems. However, when I increase the data rate past 25Hz, I start to receive multiple packets in one call. What I mean by this is that there should be an event trigger for every incoming packet. With higher output rates, I have tested the buffer size (the BytesToRead property) immediately when the event is called, and there are multiple packets in the buffer by then. I think the event fires slowly, and by the time it reaches the code, more packets have hit the buffer. One test I do is to see how many times the event is triggered per second. At 10Hz, I get 10 event triggers, awesome. At 100Hz, I get something like 40 event triggers, not good. My goal for the data rate is: 100Hz is acceptable, 200Hz preferred, and 300Hz optimum. This should work because even at 300Hz that is only 52800bps, less than half of the set 115200 baud rate. Is there anything I am overlooking? public Form1() { InitializeComponent(); serialPort1.DataReceived += new SerialDataReceivedEventHandler(serialPort1_DataReceived); } private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e) { this.Invoke(new EventHandler(Display_Results)); } private void Display_Results(object s, EventArgs e) { serialPort1.Read(IMU, 0, serial_Port1.BytesToRead); }
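
     One common restructuring is to treat DataReceived as a byte-stream notification rather than a per-packet notification: drain BytesToRead into a local buffer, cut it into 22-byte frames, and keep the UI marshalling (the Invoke call) out of the handler. A sketch, where ProcessFrame is a placeholder for whatever parsing or queuing is needed:

       private readonly List<byte> rxBuffer = new List<byte>();

       private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
       {
           // runs on a pool thread; do as little as possible here
           int available = serialPort1.BytesToRead;
           byte[] chunk = new byte[available];
           int read = serialPort1.Read(chunk, 0, available);

           lock (rxBuffer)
           {
               for (int i = 0; i < read; i++) rxBuffer.Add(chunk[i]);
               while (rxBuffer.Count >= 22)
               {
                   byte[] frame = rxBuffer.GetRange(0, 22).ToArray();
                   rxBuffer.RemoveRange(0, 22);
                   ProcessFrame(frame);   // parse here; push results to the UI via BeginInvoke or a timer
               }
           }
       }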

    Read the article

  • Objective-C Protocols within Protocols

    - by LucasTizma
    I recently began trying my hand at using protocols in my Objective-C development as an (obvious) means of delegating tasks more appropriately among my classes. I completely understand the basic notion of protocols and how they work. However, I came across a roadblock when trying to create a custom protocol that in turn implements another protocol. I since discovered the solution, but I am curious why the following DOES NOT work: @protocol STPickerViewDelegate < UIPickerViewDelegate > - ( void )customCallback; @end @interface STPickerView : UIPickerView { id < STPickerViewDelegate > delegate; } @property ( nonatomic, assign ) id < STPickerViewDelegate > delegate; @end Then in a view controller, which conforms to STPickerViewDelegate: STPickerView * pickerView = [ [ STPickerView alloc ] init ]; pickerView.delegate = self; - ( void )customCallback { ... } - ( NSString * )pickerView:( UIPickerView * )pickerView titleForRow:( NSInteger )row forComponent:( NSInteger )component { ... } The problem was that pickerView:titleForRow:forComponent: was never being called. On the other hand, customCallback was being called just fine, which isn't too surprising. I don't understand why STPickerViewDelegate, which itself conforms to UIPickerViewDelegate, does not notify my view controller when events from UIPickerViewDelegate are supposed to occur. Per my understanding of Apple's documentation, if a protocol (A) itself conforms to another protocol (B), then a class (C) that conforms to the first protocol (A) must also conform to the second protocol (B), which is exactly the behavior I want and expected. What I ended up doing was removing the id< STPickerViewDelegate > delegate property from STViewPicker and instead doing something like the following in my STViewPicker implementation where I want to evoke customCallback: if ( [ self.delegate respondsToSelector:@selector( customCallback ) ] ) { [ self.delegate performSelector:@selector( customCallback ) ]; } This works just fine, but I really am puzzled as to why my original approach did not work.
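
     One possible explanation (an assumption, since only a fragment is shown): the new delegate property shadows UIPickerView's own delegate, so assigning pickerView.delegate = self never reaches the superclass, and UIKit has no delegate to send pickerView:titleForRow:forComponent: to. A sketch of forwarding it on from the custom setter:

       - ( void )setDelegate:( id < STPickerViewDelegate > )aDelegate
       {
           delegate = aDelegate;             // keep the typed reference used for customCallback
           [ super setDelegate:aDelegate ];  // let UIPickerView call the UIPickerViewDelegate methods
       }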

    Read the article

  • Reduce number of config files to as few as possible

    - by Scott
    For most of my applications I use iBatis.Net for database access/modeling and log4Net for logging. In doing this, I need a number of *.config files for each project. For example, for a simple application I need to have the following *.config files: app.config ([AssemblyName].[Extention].config) [AssemblyName].SqlMap.config [AssemblyName].log4Net.config [AssemblyName].SqlMapProperties.config providers.config When these applications go from DEV to TEST to PRODUCTION environments, the settings contained in these files change depending on the environment. When the number of files get compounded by having 5-10 (or more) supporting executables per project, the work load on the infrastructure team (the ones doing the roll-outs to the different environments) gets rather high. We also have a high risk of one of the config files being missed, or a mistype in the config file. What is the best way to avoid these risks? Should I combine all of the config files into one file? (is that possible with iBatis?) I know that with VisualStudio 2010 they introduce transforms for these config files that allow the developer to setup all the settings for the different environments and then dynamically (depending on the build kicked off) the config files get updated to the correct versions. (VS 2010 - transforms) Thank you for any help that you can provide.

    Read the article

  • Data Transfer Objects VS Domain/ActiveRecord Entities in the View in RoR

    - by leypascua
    I'm coming from a .NET background, where it is a practice to not bind domain/entity models directly to the view in not-so-basic CRUD-ish applications where the view does not directly project entity fields as-is. I'm wondering what's the practice in RoR, where the default persistence mechanism is ActiveRecord. I would assert that presentation-related info should not be leaked to the entities, not sure though if this is how real RoR heads would do it. If DTOs/model per view is the approach, how will you do it in Rails? Your thoughts? EDIT: Some examples: - A view shows a list of invoices, with the number of unique items in one column. - A list of credit card accounts, where possibly fraudulent transactions were executed. For that, the UI needs to show this row in red. For both scenarios, The lists don't show all of the fields of the entities, just a few to show in the list (like invoice #, transaction date, name of the account, the amount of the transaction) For the invoice example, The invoice entity doesn't have a field "No. of line items" mapped on it. The database has not been denormalized for perf reasons and it will be computed during query time using aggregate functions. For the credit card accounts example, surely the card transaction entity doesn't have a "Show-in-red" or "IsFraudulent" invariant. Yes it may be a business rule, but for this example, that is a presentation concern, so I would like to keep it out of my domain model.
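
     In Rails this often shows up as a small presenter/view-model object built in the controller rather than a formal DTO layer; a sketch for the invoice example (model names, associations and columns are assumptions):

       class InvoiceRow
         attr_reader :number, :transaction_date, :account_name, :amount, :line_item_count

         def initialize(invoice)
           @number           = invoice.number
           @transaction_date = invoice.transaction_date
           @account_name     = invoice.account.name
           @amount           = invoice.amount
           @line_item_count  = invoice.line_items.count   # computed for the view, not stored on the entity
         end
       end

       # in the controller action
       @rows = Invoice.all(:include => [:account, :line_items]).map { |i| InvoiceRow.new(i) }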

    Read the article

  • What is the fastest way to pull a few element values out of XML files in Perl?

    - by Anon Guy
    I have a bunch of XML files that are about 1-2 megabytes in size. Actually, more than a bunch, there are millions. They're all well-formed and many are even validated against their schema (confirmed with libxml2). All were created by the same app, so they're in a consistent format (though this could theoretically change in the future). I want to check the values of one element in each file from within a Perl script. Speed is important (I'd like to take less than a second per file) and as noted I already know the files are well-formed. I am sorely tempted to simply 'open' the files in Perl and scan through until I see the element I am looking for, grab the value (which is near the start of the file), and close the file. On the other hand, I could use an XML parser (which might protect me from future changes to the XML formatting) but I suspect it will be slower than I'd like. Can anyone recommend an appropriate approach and/or parser? Thanks in advance.
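
     A hedged sketch of the "just scan the top of the file" approach (the <status> element name and the 200-line cut-off are made up); XML::Twig or XML::LibXML's pull reader would be the safer fallback if the format ever changes:

       use strict;
       use warnings;

       sub grab_value {
           my ($path) = @_;
           open my $fh, '<', $path or die "cannot open $path: $!";
           while (my $line = <$fh>) {
               if ($line =~ m{<status>\s*([^<]*?)\s*</status>}) {
                   close $fh;
                   return $1;
               }
               last if $. > 200;   # value is said to be near the start; stop early
           }
           close $fh;
           return undef;
       }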

    Read the article

  • HTTP Basic authentication using Authlogic or authenticate_or_request_with_http_basic for API call?

    - by Gaius Parx
    I have a Rails 2.3.x app that implements the act_as_authentic in User model and a UserSession model as per Authlogic Github example. I am implementing an API to allow access from iPhone. Will be using HTTP Basic authentication via https (will not implement single access token). Each API call requires a username/password for the access. I am able to access the API by calling http://username:password@localhost:3000/books.xml for example. Authlogic will not persist if using the single access token. But I am using HTTP Basic which I think Authlogic will create session for the API calls, which is not used for my API methods. So for each API call I made, new session object is created. Thus appear to me that this would load up the server resource pretty quickly. Sounds like a bad idea. The alternative is to use the Rails authenticate_or_request_with_http_basic for API controllers. Example adding a before_filter: def require_http_auth_user authenticate_or_request_with_http_basic do |username, password| if @current_user = User.find_by_email(username) @current_user.valid_password?(password) else false end end end This will bypass the Authlogic UserSession and just use the User model. But this will involve using separate authentication codes in the app. Anyone has any comments and can share their experience? Thanks

    Read the article

  • SQLite self-join performance

    - by Derk
     What I essentially want is to retrieve all features and values of products which have a particular feature and value. For example: I want to know all available hard drive sizes of products that have an Intel processor. I have three tables: product_to_value (product_id, feature_id, value_id) features (id, value) // for example Processor family, Storage size, etc. values (id, value) // for example Intel, 60GB, etc. The simplified query I have now: SELECT features.name, featurevalues.name, featurevalues.value FROM products, products as prod2, features, features as feat2, values, values as val2 WHERE products.feature = features.id AND products.value = values.id AND products.product = prod2.product AND prod2.feature_id = feat2.id AND prod2.value_id = val2.id AND features.id = ? AND feat2.id = ? All columns have an index. I am using SQLite. The problem is that it's very slow (70ms per query; without the self-join it's <1ms). Is there a smarter way to fetch data like this? Or is this too much to ask of SQLite? I personally think I am simply overlooking something, as I am quite new to SQLite.
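
     A sketch of the same query written directly against the three tables described above (two passes over product_to_value, aliased p1/p2), plus a composite index that lets SQLite satisfy the self-join from the index alone; the column names are taken from the description, so adjust as needed:

       SELECT f.value AS feature_name, v.value AS feature_value
       FROM product_to_value AS p1
       JOIN product_to_value AS p2 ON p2.product_id = p1.product_id
       JOIN features AS f ON f.id = p2.feature_id
       JOIN "values" AS v ON v.id = p2.value_id
       WHERE p1.feature_id = ?   -- e.g. "Processor family"
         AND p1.value_id   = ?;  -- e.g. "Intel"

       CREATE INDEX idx_ptv_feature_value_product
           ON product_to_value (feature_id, value_id, product_id);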

    Read the article

  • C++: Check istream has non-space, non-tab, non-newline characters left without extracting chars

    - by KRao
    I am reading a std::istream and I need to verify without extracting characters that: 1) The stream is not "empty", i.e. that trying to read a char will not result in an fail state (solved by using peek() member function and checking fail state, then setting back to original state) 2) That among the characters left there is at least one which is not a space, a tab or a newline char. The reason for this is, is that I am reading text files containing say one int per line, and sometimes there may be extra spaces / new-lines at the end of the file and this causes issues when I try get back the data from the file to a vector of int. A peek(int n) would probably do what I need but I am stuck with its implementation. I know I could just read istream like: while (myInt << myIstream) {...} //Will fail when I am at the end but the same check would fail for a number of different conditions (say I have something which is not an int on some line) and being able to differentiate between the two reading errors (unexpected thing, nothing left) would help me to write more robust code, as I could write: while (something_left(myIstream)) { myInt << myIstream; if (myStream.fail()) {...} //Horrible things happened } Thank you!
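
     A minimal sketch of that "something_left" check: let std::ws eat whitespace (spaces, tabs, newlines) and then peek. It does consume the whitespace, but never touches the next real token, which is usually all that matters for this kind of trailing-blank-line problem:

       #include <istream>

       bool something_left(std::istream& in)
       {
           in >> std::ws;   // discard spaces / tabs / newlines only; sets eofbit if nothing else is left
           return in.good() &&
                  in.peek() != std::istream::traits_type::eof();
       }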

    Read the article

  • Auto switching databases from a rails app gracefully from the ApplicationController?

    - by Zaqintosh
    I've seen this post a few times, but haven't really found the answer to this specific question. I'd like to run a rails application that based on the detected request.host (imagine I have two subdomains points to the same rails app and server ip address: myapp1.domain.com and myapp2.domain.com). I'm trying to have myapp1 use the default "production" database, and myapp2 requests always use the alternative remote database. Here is an example of what I tried to do in Application controller that did not work: class ApplicationController < ActionController::Base helper :all before_filter :use_alternate_db private def use_alternate_db if request.host == 'myapp1.domain.com' regular_db elsif request.host == 'myapp2.domain.com' alternate_db end end def regular_db ActiveRecord::Base.establish_connection :production end def alternate_db ActiveRecord::Base.establish_connection( :adapter => 'mysql', :host => '...', :username => '...', :password => '...', :database => 'alternatedb' ) end end The problem is when it switches databases using this method, all connections (including valid sessions across the different subdomains get interrupted...). All examples online have people controlling database connectivity at the model level, but this would involve adding code all over my application. Is there some way to globally switch database connections on a per-request basis in the manner I'm suggesting above WITHOUT having to inject code all over my application? The added complexity here is I'm using Heroku as a hosting provider, so I have no control at the apache / rails application server level. I have looked at solutions like dbcharmer and magicmodels, but none seem to show examples of doing it in the manner that I'm trying to. Thanks for any help!

    Read the article

  • Managing Cisco programmatically; Telnet vs SNMP?

    - by MikeHerrera
    I was recently approached by a network-engineer, co-worker who would like to offload his minor network admin duties to a junior-level helpdesk tech. The specific location in need of management acts as an ISP for tenants on its single-site property, so there's a lot of small adjustments being made on a daily basis. I am thinking it would be helpful to write him a winform app to manage the 32 Cisco devices, on-site. I'd like to initially provide functionality which could modify access control lists, port VLAN assignments, and bandwidth limitations per VLAN... adding more to the list as its deemed valuable. My initial thought was to emulate a telnet session with the network device; utilizing my network-engineer's familiarity with the command-line / IOS interaction. Minimal time would be required to learn Cisco IOS conventions, myself. Though while searching for solutions, it appears that most people favor SNMP. That, or, their specific circumstances pushed them in the direction of SNMP. I wanted to know if I've overlooked an obvious benefit of SNMP. Should I be using SNMP? Why or why not?

    Read the article

  • Intern working for Indian NGO - Help with PHP 4, advising staff

    - by Kevin Burke
    Hello, For the past three months I've been working for an Indian NGO (http://sevamandir.org), doing some volunteer work in the field but also trying to improve their website, which needs a ton of work. Recently I've been trying to fix the "subscribe to newsletter" button, which is broken. I used filter_var to filter the email input, but when I tried to test this out I got an error. Then I learned that the web host is still using php version 4.3.2 and register_globals is turned on. I've mentioned that they should upgrade their web host before (they are paying around $50 per year for Rediff Web Hosting, complete with 100MB storage and 1 MySQL database). That would add a lot of complexity for the IT staff of 3, who would have to update everyone's email information (I assume? this is a 250-person organization), and have me find a new web host and teach them about it. The staff isn't that sophisticated about web usage - the head guy still uses IE6, and the website's laid out in tables (they use Dreamweaver WYSIWYG to lay out pages). So I've got two options - use regular expressions to filter the email, which I'm not that skilled at doing (and would be more vulnerable to exploitation after I leave), turn off register globals and then try to teach the staff what I'm doing, or try to get them to upgrade their versions of PHP and MySQL and/or change web host. I'd appreciate some advice. Thanks for your help, Kevin
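
     Since filter_var only exists from PHP 5.2 onwards, a PHP 4-safe check has to fall back on preg_match; a rough pattern like this covers normal addresses without trying to be RFC-complete:

       function is_valid_email($email)
       {
           // deliberately simple; rejects obvious junk rather than enforcing the full RFC
           return (bool) preg_match('/^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$/', $email);
       }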

    Read the article

  • Benchmarking a UDP server

    - by Nicolas
     I am refactoring a UDP listener from Java to C. It needs to handle between 1000 and 10000 UDP messages per second, with an average data length of around 60 bytes. There is no reply necessary. Data cannot be lost (Don't ask why UDP was decided). I fork off a process to deal with the incoming data so that I can recvfrom as quickly as possible - without filling up my kernel buffers. The child then handles the data received. In short, my algorithm is: Listen for data. When data is received, check for errors. Fork off a child. If I'm the child, do what I need with the data and exit. If I'm the parent, reap any zombie children with waitpid(-1, NULL, WNOHANG). Repeat. Firstly, any comments about the above? I'm creating the socket with socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP), binding with AF_INET and INADDR_ANY, and calling recvfrom with no flags. Secondly, can anyone suggest something that I can use to test that this application (or at least the listener) can handle more messages than I am expecting? Or would I need to hack something together to do this? I'd guess the latter would be better, so that I can compare the data that is generated with the data that is received. But comments would be appreciated.
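
     For the load-generation side, something hacked together along these lines is usually enough: blast fixed-size datagrams carrying a sequence number, then have the listener count the gaps. The port, address and packet count below are placeholders:

       #include <arpa/inet.h>
       #include <netinet/in.h>
       #include <stdio.h>
       #include <string.h>
       #include <sys/socket.h>
       #include <unistd.h>

       int main(void)
       {
           int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
           struct sockaddr_in dst;
           memset(&dst, 0, sizeof dst);
           dst.sin_family = AF_INET;
           dst.sin_port = htons(9000);                      /* assumed listener port */
           inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

           char payload[60];
           for (long i = 0; i < 100000; i++) {
               memset(payload, 'x', sizeof payload);
               snprintf(payload, sizeof payload, "%ld", i); /* sequence number for loss detection */
               sendto(fd, payload, sizeof payload, 0,
                      (struct sockaddr *)&dst, sizeof dst);
           }
           close(fd);
           return 0;
       }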

    Read the article

  • There's a black hole in my server (TcpClient, TcpListener)

    - by Matías
    Hi, I'm trying to build a server that will receive files sent by clients over a network. If the client decides to send one file at a time, there's no problem, I get the file as I expected, but if it tries to send more than one I only get the first one. Here's the server code: I'm using one Thread per connected client public void ProcessClients() { while (IsListening) { ClientHandler clientHandler = new ClientHandler(listener.AcceptTcpClient()); Thread thread = new Thread(new ThreadStart(clientHandler.Process)); thread.Start(); } } The following code is part of ClientHandler class public void Process() { while (client.Connected) { using (MemoryStream memStream = new MemoryStream()) { int read; while ((read = client.GetStream().Read(buffer, 0, buffer.Length)) > 0) { memStream.Write(buffer, 0, read); } if (memStream.Length > 0) { Packet receivedPacket = (Packet)Tools.Deserialize(memStream.ToArray()); File.WriteAllBytes(Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory), Guid.NewGuid() + receivedPacket.Filename), receivedPacket.Content); } } } } On the first iteration I get the first file sent, but after it I don't get anything. I've tried using a Thread.Sleep(1000) at the end of every iteration without any luck. On the other side I have this code (for clients) . . client.Connect(); foreach (var oneFilename in fileList) client.Upload(oneFilename); client.Disconnect(); . . The method Upload: public void Upload(string filename) { FileInfo fileInfo = new FileInfo(filename); Packet packet = new Packet() { Filename = fileInfo.Name, Content = File.ReadAllBytes(filename) }; byte[] serializedPacket = Tools.Serialize(packet); netStream.Write(serializedPacket, 0, serializedPacket.Length); netStream.Flush(); } netStream (NetworkStream) is opened on Connect method, and closed on Disconnect. Where's the black hole? Can I send multiple objects as I'm trying to do? Thanks for your time.
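
     The inner Read loop only returns 0 once the peer closes the socket, so everything a client sends over one connection gets glued into a single Deserialize call. A common fix is a length prefix per packet; a sketch of both sides, where ReadExactly is a helper invented here:

       // client side, in Upload(): prefix each serialized packet with its length
       byte[] payload = Tools.Serialize(packet);
       netStream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
       netStream.Write(payload, 0, payload.Length);

       // server side, in Process(): read one framed packet at a time
       static byte[] ReadExactly(NetworkStream stream, int count)
       {
           byte[] buffer = new byte[count];
           int offset = 0;
           while (offset < count)
           {
               int read = stream.Read(buffer, offset, count - offset);
               if (read == 0) throw new EndOfStreamException("peer closed the connection");
               offset += read;
           }
           return buffer;
       }

       // usage:
       // int length    = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
       // byte[] body   = ReadExactly(stream, length);
       // Packet packet = (Packet)Tools.Deserialize(body);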

    Read the article

  • Questions on usages of sizeof

    - by Appu
     Question 1: I have a struct like struct foo { int a; char c; }; When I say sizeof(foo), I am getting 8 on my machine. As per my understanding, that's 4 bytes for the int, 1 byte for the char and 3 bytes of padding. Is that correct? Given a struct like the above, how will I find out how many bytes will be added as padding? Question 2: I am aware that sizeof can be used to calculate the size of an array. Mostly I have seen the usage (foos is an array of foo) sizeof(foos)/sizeof(*foos) but I found that the following also gives the same result: sizeof(foos) / sizeof(foo) Is there any difference between these two? Which one is preferred? Question 3: Consider the following statement: foo foos[] = {10,20,30}; When I do sizeof(foos) / sizeof(*foos), it gives 2, but the array has 3 elements. If I change the statement to foo foos[] = {{10},{20},{30}}; it gives the correct result of 3. Why is this happening? Any thoughts?
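
     A small program that makes the padding visible and shows why the brace-less initializer only produces two elements; the exact padding is ABI-dependent, so 3 is typical on an x86 build rather than guaranteed:

       #include <cstddef>
       #include <iostream>

       struct foo { int a; char c; };

       int main()
       {
           std::size_t members = sizeof(int) + sizeof(char);
           std::cout << "padding: " << sizeof(foo) - members << "\n";  // typically 3

           foo flat[]   = {10, 20, 30};        // brace elision: 10 and 20 fill flat[0], 30 starts flat[1]
           foo nested[] = {{10}, {20}, {30}};  // one foo per inner brace pair

           std::cout << sizeof(flat)   / sizeof(*flat)   << "\n";  // 2
           std::cout << sizeof(nested) / sizeof(*nested) << "\n";  // 3
       }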

    Read the article
