Search Results

Search found 11901 results on 477 pages for 'triple store'.

Page 409/477 | < Previous Page | 405 406 407 408 409 410 411 412 413 414 415 416  | Next Page >

  • Core Data Predicates with Subclassed NSManagedObjects

    - by coneybeare
    I have an AUDIO class. This audio has a SOUND_A subclass and a SOUND_B subclass. This is all done correctly and is working fine. I have another model, let's call it PLAYLIST_JOIN, and this can contain (in the real world) SOUND_A's and SOUND_B's, so we give it a relationship of AUDIO and PLAYLIST. This all works in the app. The problem I am having now is querying the PLAYLIST_JOIN table with an NSPredicate. What I want to do is find an exact PLAYLIST_JOIN item by giving it 2 keys in the predicate:

        sound_a._sound_a_id = %@ && playlist.playlist_id = %@
        sound_b.sound_b_id = %@ && playlist.playlist_id = %@

    The main problem is that because the table does not store sound_a and sound_b, but stores audio, I cannot use this syntax. I do not have the option of reorganizing sound_a and sound_b to use the same _id attribute name, so how do I do this? Can I pass a method to the predicate? Something like this:

        [audio getID] = %@ && playlist_id = %@

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in an ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it. I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key. Thus it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leak in the first place. It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that they can run the few times a year they need to access the restricted data, so the data could be decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what they do we may leak future data. I think the big disadvantage is this would require re-writing the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend? More importantly, why?
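    A minimal sketch of the asymmetric idea described above: the web tier encrypts with a public key it cannot use to decrypt, and only the offline fat client holds the private half. The helper names are hypothetical, and it assumes each restricted field fits in a single RSA block; longer data would normally be encrypted with a random AES key that is itself RSA-wrapped.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // Hypothetical helper: the web/database tier only ever sees the public key,
        // so a compromised server cannot decrypt what it stores.
        public static class RestrictedFieldCrypto
        {
            // publicKeyXml is exported once from the offline key pair via ToXmlString(false).
            public static byte[] Encrypt(string plainText, string publicKeyXml)
            {
                using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider())
                {
                    rsa.FromXmlString(publicKeyXml);          // public half only
                    byte[] data = Encoding.UTF8.GetBytes(plainText);
                    return rsa.Encrypt(data, true);           // OAEP padding; input must fit one RSA block
                }
            }

            // Runs only in the fat client that holds the private key.
            public static string Decrypt(byte[] cipherText, string privateKeyXml)
            {
                using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider())
                {
                    rsa.FromXmlString(privateKeyXml);
                    byte[] data = rsa.Decrypt(cipherText, true);
                    return Encoding.UTF8.GetString(data);
                }
            }
        }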

    Read the article

  • Compact data structure for storing a large set of integral values

    - by Odrade
    I'm working on an application that needs to pass around large sets of Int32 values. The sets are expected to contain ~1,000,000-50,000,000 items, where each item is a database key in the range 0-50,000,000. I expect the distribution of ids in any given set to be effectively random over this range. The operations I need on the set are dirt simple: add a new value, and iterate over all of the values. There is a serious concern about the memory usage of these sets, so I'm looking for a data structure that can store the ids more efficiently than a simple List<int> or HashSet<int>. I've looked at BitArray, but that can be wasteful depending on how sparse the ids are. I've also considered a bitwise trie, but I'm unsure how to calculate the space efficiency of that solution for the expected data. A Bloom Filter would be great, if only I could tolerate the false negatives. I would appreciate any suggestions of data structures suitable for this purpose. I'm interested in both out-of-the-box and custom solutions.

    EDIT: To answer your questions:
    - No, the items don't need to be sorted.
    - By "pass around" I mean both pass between methods and serialize and send over the wire. I clearly should have mentioned this.
    - There could be a decent number of these sets in memory at once (~100).
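    One custom direction that fits the add-then-iterate usage, sketched below under the assumption that a set can be frozen (built once, then only iterated): sort the ids, store the gaps between consecutive ids, and varint-encode the gaps, so most values cost one or two bytes instead of four. The class and method names are illustrative only.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Rough sketch: sort once, then store deltas between consecutive ids as varints.
        // For ~1M-50M random ids in 0..50,000,000 the average gap is small, so most
        // deltas fit in one or two bytes instead of the four a List<int> would use.
        public class CompressedIdSet
        {
            private readonly byte[] encoded;
            public int Count { get; private set; }

            public CompressedIdSet(IEnumerable<int> ids)
            {
                int[] sorted = ids.Distinct().OrderBy(x => x).ToArray();
                Count = sorted.Length;
                var buffer = new List<byte>(sorted.Length * 2);
                int previous = 0;
                foreach (int id in sorted)
                {
                    WriteVarint(buffer, (uint)(id - previous));
                    previous = id;
                }
                encoded = buffer.ToArray();
            }

            public IEnumerable<int> Values()
            {
                int position = 0, current = 0;
                for (int i = 0; i < Count; i++)
                {
                    current += (int)ReadVarint(encoded, ref position);
                    yield return current;
                }
            }

            private static void WriteVarint(List<byte> output, uint value)
            {
                while (value >= 0x80) { output.Add((byte)(value | 0x80)); value >>= 7; }
                output.Add((byte)value);
            }

            private static uint ReadVarint(byte[] input, ref int position)
            {
                uint result = 0; int shift = 0; byte b;
                do { b = input[position++]; result |= (uint)(b & 0x7F) << shift; shift += 7; }
                while ((b & 0x80) != 0);
                return result;
            }
        }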

    Read the article

  • Help me plan larger Qt project

    - by Pirate for Profit
    I'm trying to create an automated task management system for our company, because they pay me to waste my time. New users will create a "profile", which will store all the working data (I guess serialize everything into XML, right?). A "profile" will contain many different tasks. Tasks are basically just standard computer janitor crap such as moving around files, reading/writing to databases, pinging servers, etc. So as you can see, a task has many different jobs it does, and tasks should run indefinitely as long as the user somehow generates "jobs" for them. There should also be a way to enable/disable (start/pause) tasks. They say create the UI first, so... I figure the best way to represent this is with a list-view widget that lists all the tasks in the profile. Enabled tasks will be bold, disabled ones won't be; then when you double-click a task, a tab in the main view opens with all the settings, output, and errors. You can right-click a task in the list-view to enable/disable/etc. So each task will be a closable tab, but when you close it, it just hides. My question is: should I extend QAction and QTabWidget so I can easily drop tasks in and out of my list-view and tab bar? I'm thinking of some way to make this plugin-based, since a lot of the plugins may share similar settings (some of the same options, but different info is input). Also, what's the best way to set up threading for this application?

    Read the article

  • Execute a block of database queries

    - by Nightmare
    I have the following task to complete: in my program I have a block of database queries or questions. I want to execute these questions and wait for the result of all of them, or catch an error if one question fails! My Question object looks like this (simplified):

        public class DbQuestion(String sql) { [...] }

        [...]

        // The answer is just a holder for custom data...
        public void SetAnswer(DbAnswer answer)
        {
            // Store the answer in the question and fire an event to the listeners
            this.OnAnswered(EventArgs.Empty);
        }

        [...]

        public void SetError()
        {
            // Signal an error in this query!
            this.OnError(EventArgs.Empty);
        }

    So every question fired to the database has a listener that waits for the parsed result. Now I want to fire some questions asynchronously to the database (max. 5 or so) and fire an event with the data from all questions, or an error if only one question throws one! What is the best, or at least a good, way to accomplish this task? Can I really execute more than one question in parallel and stop all my work when one question throws an error? I think I need some inspiration on this... Just a note: I'm working with .NET Framework 2.0.
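    A rough sketch of the "fire several, wait for all, surface the first error" pattern that works on .NET Framework 2.0; each ThreadStart would wrap the execution of one DbQuestion, and all type and member names here are made up.

        using System;
        using System.Collections.Generic;
        using System.Threading;

        // Queues each job on the thread pool, waits until every job has reported
        // back, then rethrows the first captured error (or you could raise OnError).
        public class WorkBatch
        {
            private readonly object sync = new object();
            private readonly ManualResetEvent done = new ManualResetEvent(false);
            private int pending;
            private Exception firstError;

            public void RunAll(List<ThreadStart> jobs)
            {
                if (jobs.Count == 0) return;
                pending = jobs.Count;
                foreach (ThreadStart job in jobs)
                {
                    ThreadStart captured = job;              // capture for the anonymous delegate
                    ThreadPool.QueueUserWorkItem(delegate
                    {
                        try { captured(); }
                        catch (Exception ex)
                        {
                            lock (sync) { if (firstError == null) firstError = ex; }
                        }
                        finally
                        {
                            if (Interlocked.Decrement(ref pending) == 0) done.Set();
                        }
                    });
                }

                done.WaitOne();                              // block until all questions answered or failed
                if (firstError != null) throw firstError;    // or fire an error event here instead
            }
        }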

    Read the article

  • Jquery find first visible element after horizontal scroll

    - by lolo flores
    I'm new (only two weeks old) to jQuery, so please bear with me. I know that a very similar question was asked some time ago, but I do not know how to adapt the answer to my problem. I have a very wide multicolumn layout, something like this:

        | aaaa | bbbb | cccc | … |
        | aaaa | b    | cc   | … |
        | aaa  | cccc | ddd  | … |

    The code looks like:

        <div id="container">
          <p>aaaaaaaaaaa</p>
          <p>bbbbb</p>
          <p>ccccccccccc</p>
          <p>dddddddddd</p>
          ...
          <p>xxxxxx</p>
        </div>

    There is no vertical scrolling and the container width is set in such a way that only two columns are shown. The user scrolls left or right to see the relevant text. What I want is to get the position currently on display, store it (maybe in a cookie) and retrieve it the next time the user opens the page. I think that I need a way of finding out which paragraph is currently the top-left-most, but other suggestions are very welcome. Any ideas? btw: this is an internal project, so Mozilla only :-) Thanks, Lolo

    Read the article

  • Suggestions on error handling of Win32 C++ code: AtlThrow vs. STL exceptions

    - by EmbeddedProg
    In writing Win32 C++ code, I'd appreciate some hints on how to handle errors of Win32 APIs. In particular, in case of a failure of a Win32 function call (e.g. MapViewOfFile), is it better to:

    1. use AtlThrowLastWin32, or
    2. define a Win32Exception class derived from std::exception, with an added HRESULT data member to store the HRESULT corresponding to the value returned by GetLastError?

    In this latter case, I could use the what() method to return a detailed error string (e.g. "MapViewOfFile call failed in MyClass::DoSomething() method."). What are the pros and cons of 1 vs. 2? Is there any other better option that I am missing? As a side note, if I'd like to localize the component I'm developing, how could I localize the exception what() string? I was thinking of building a table mapping the original English string returned by what() to a Unicode localized error string. Could anyone suggest a better approach? Thanks much for your insights and suggestions.

    Read the article

  • How to read the birthday_date from the Facebook API

    - by Steve
    I have been chasing my tail on this! And it should be so simple!! I have an app in Facebook that is working fine. However, I need to get the user's birth date. I have successfully got the request for extended permissions, but cannot get the birthday_date out and into a variable/store in the database.

        <?php
        require_once('facebook.php');
        $facebook = new Facebook(array(
            'appId'  => 'xxxxx',
            'secret' => 'yyyyyyy',
            'cookie' => true
        ));
        if ($facebook->getSession()) {
            $uid  = $facebook->getUser();
            $fbme = $facebook->api('/me');
        } else {
            $params = array(
                'fbconnect' => 0,
                'canvas'    => 1,
                'req_perms' => 'publish_stream','email','user_location','user_birthday'
            );
            $loginUrl = $facebook->getLoginUrl($params);
            print "<script type='text/javascript'>top.location.href = '$loginUrl';</script>";
        }
        $session = $facebook->getSession();
        $token   = $session['access_token'];

    I would be very grateful if someone could show me the PHP code that reads the extended permissions and places the results into variables. Thanks, Steve

    Read the article

  • Syncing Data to Remote Services, Best Practices for Caching?

    - by viatropos
    I want to be able to publish events to Eventbrite, Eventful, and Google Calendar for my Google Apps. Each service has slightly different properties for events... I will be syncing many other things too, such as users with Google Contacts and MailChimp, documents with Google Docs and some other services, etc. So I'm wondering, what is the recommended way of retrieving the data for the end user so that it's reasonably maintainable and optimized? Here are the approaches I'm considering and having trouble choosing between:

    1. My App keeps a central database of all the models (Event, Document, User, Form, etc.), and whenever Admin creates an object (e.g. creates it through Eventbrite or through our Admin panel), we sync them and store a copy in our local database. When User goes to the site /events, App retrieves the events from the database.

    2. Read Events from a target feed, such as the Eventbrite or Eventful feed, and scrap the local database.

    Basically, I'm wondering: if we're storing all of the data on a remote service, do we really need to have a local database copy of the data? When would we need to have a local database, and when wouldn't we?

    Read the article

  • Handling RSS Tags with NSXMLParser for iPhone

    - by MartinW
    I've found the following code for parsing through RSS, but it does not seem to allow for nested elements:

        - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
          namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
            NSLog(@"ended element: %@", elementName);
            if ([elementName isEqualToString:@"item"]) {
                // save values to an item, then store that item into the array...
                [item setObject:currentTitle forKey:@"title"];
                [item setObject:currentLink forKey:@"link"];
                [item setObject:currentSummary forKey:@"summary"];
                [item setObject:currentDate forKey:@"date"];
                [item setObject:currentImage forKey:@"media:thumbnail"];

    The RSS to be used is:

        <item>
          <title>Knife robberies and burglaries up</title>
          <description>The number of robberies carried out at knife-point has increased sharply and burglaries are also up, latest crime figures indicate</description>
          <link>http://news.bbc.co.uk/go/rss/-/1/hi/uk/7844455.stm</link>
          <guid isPermaLink="false">http://news.bbc.co.uk/1/hi/uk/7844455.stm</guid>
          <pubDate>Thu, 22 Jan 2009 13:02:03 GMT</pubDate>
          <category>UK</category>
          <media:thumbnail width="66" height="49" url="http://newsimg.bbc.co.uk/media/images/45400000/jpg/_45400861_policegeneric_pa.jpg"/>
        </item>

    I need to extract the "url" attribute from the "media:thumbnail" tag. Thanks, Martin

    Read the article

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use sqlite (sqlite3) for a project to store hundreds of thousands of records (I would like sqlite so users of the program don't have to run a [my]sql server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but have found the standard

        update table set left_value = 4, right_value = 5 where id = 12340;

    to be very slow. I have tried surrounding every thousand or so with

        begin;
        ....
        update...
        update table set left_value = 4, right_value = 5 where id = 12340;
        update...
        ....
        commit;

    but again, very slow. Odd, because when I populate it with a few hundred thousand (with inserts), it finishes in seconds. I am currently trying to test the speed in python (the slowness is at the command line and in python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution, unless I am doing something wrong. Thoughts? (I would take an open-source alternative to SQLite that is portable as well.)

    Read the article

  • jQuery / Loading content into div and changing url's (working but buggy)

    - by Bruno
    This is working, but I'm not able to set an index.html file on my server root where I can specify the first page to go to. It also gets very buggy in some situations. Basically it's a common site (menu content), but the idea is to load the content without refreshing the page, defining the div to load the content into, and making each page accessible by its URL. One of the biggest problems here is dealing with all the URL situations that may occur. The ideal would be to have a rel="divToLoadOn" and then pass it to my loadContent() function... so I would like your ideas/solutions for this, please. Thanks in advance!

        //if page comes from URL
        if(window.location.hash != ''){
            var url = window.location.hash;
            url = '..'+url.substr(1, url.length);
            loadContent(url);
        }

        //if page comes from an internal link
        $("a:not([target])").click(function(e){
            e.preventDefault();
            var url = $(this).attr("href");
            if(url != '#'){
                loadContent($(this).attr("href"));
            }
        });

        //LOAD CONTENT
        function loadContent(url){
            var contentContainer = $("#content");
            //set load animation
            $(contentContainer).ajaxStart(function() {
                $(this).html('loading...');
            });
            $.ajax({
                url: url,
                dataType: "html",
                success: function(data){
                    //store data globally so it can be used on complete
                    window.data = data;
                },
                complete: function(){
                    var content = $(data).find("#content").html();
                    var contentTitle = $(data).find("title").text();
                    //change url
                    var parsedUrl = url.substr(2,url.length)
                    window.location.hash = parsedUrl;
                    //change title
                    var titleRegex = /(.*)<\/title/.exec(data);
                    contentTitle = titleRegex[1];
                    document.title = contentTitle;
                    //renew content
                    $(contentContainer).fadeOut(function(){
                        $(this).html(content).fadeIn();
                    });
            });
        }

    Read the article

  • 2D Histogram in R: Converting from Count to Frequency within a Column

    - by Jac
    Would appreciate help with generating a 2D histogram of frequencies, where frequencies are calculated within a column. My main issue: converting from counts to column-based frequency. Here's my starting code:

        # expected packages
        library(ggplot2)
        library(plyr)

        # generate example data corresponding to expected data input
        x_data = sample(101:200, 10000, replace = TRUE)
        y_data = sample(1:100, 10000, replace = TRUE)
        my_set = data.frame(x_data, y_data)

        # define x and y interval cut points
        x_seq = seq(100, 200, 10)
        y_seq = seq(0, 100, 10)

        # label samples as belonging within x and y intervals
        my_set$x_interval = cut(my_set$x_data, x_seq)
        my_set$y_interval = cut(my_set$y_data, y_seq)

        # determine count for each x,y block
        xy_df = ddply(my_set, c("x_interval", "y_interval"), "nrow")  # still need to convert for use with dplyr

        # convert from count to frequency based on the formula:
        #   freq = count / sum(count in given x interval)
        ################ TRYING TO FIGURE OUT #################

        # plot results
        fig_count <- ggplot(xy_df, aes(x = x_interval, y = y_interval)) +
            geom_tile(aes(fill = nrow))   # count
        fig_freq  <- ggplot(xy_df, aes(x = x_interval, y = y_interval)) +
            geom_tile(aes(fill = freq))   # frequency

    I would appreciate any help in how to calculate the frequency within a column. Thanks! jac

    EDIT: I think the solution will require the following steps:
    1) Calculate and store overall counts for each x-interval factor.
    2) Divide the individual bin count by its corresponding x-interval factor count to obtain frequency.
    Not sure how to carry this out though.

    Read the article

  • Ultra-grand super acts_as_tree rails query

    - by Bloudermilk
    Right now I'm dealing with an issue regarding an intense acts_as_tree MySQL query via Rails. The model I am querying is Foo. A Foo can belong to any one City, State or Country. My goal is to query Foos based on their location. My locations table is set up like so: I have a table in my database called locations, and I use a combination of acts_as_tree and polymorphic associations to store each individual location as either a City, State or Country. (This means that my table consists of the columns id, name, parent_id, type.) Let's say, for instance, I want to query Foos in the state "California". Besides Foos that directly belong to "California", I should get all Foos that belong to every City in "California", like Foos in "Los Angeles" and "San Francisco". Not only that, but I should get any Foos that belong to the Country that "California" is in, "United States". I've tried a few things with associations to no avail. I feel like I'm missing some super-helpful Rails-fu here. Any advice?

    Read the article

  • Are there any less costly alternatives to Amazon's Relational Database Service (RDS)?

    - by swapnonil
    Hi all, I have the following requirement. I have a database containing the contact and address details of at least 2000 members of my school alumni organization. We want to store all that information in a relational model so that:

    - This data can be created and edited on demand.
    - This data is always backed up and should be simple to restore in case the master copy becomes unusable.
    - All sensitive personal information residing in this database is guaranteed to be available only to authorized users.

    This database won't be online in the first 6 months. It will go online only after a website is built on top of it. I am not a DBA and I don't want to spend time doing things like backups. I thought Amazon's RDS, with its automatic backup facility, was the perfect solution for our needs. The only problem is that, being a voluntary organization, we cannot spare the monthly $100 to $150 fees this service demands. So my question is: are there any less costly alternatives to Amazon's RDS?

    Read the article

  • Information about PTE's (Page Table Entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4 KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes. The problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes:

    - Since every allocation needs 4 KB, we probably reach the 2 GB limit very soon. This problem could be solved if I made a 64-bit executable (didn't try it yet).
    - Even when I only need a few hundred megabytes, the allocations fail at a certain moment.

    The second problem is the biggest one, and I think it's related to the maximum number of PTEs (page table entries, which store information on how virtual memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process. My questions (or a cry for tips):

    - Where can I find information about the maximum number of PTEs in a process?
    - Is this different (higher) for 64-bit systems/applications or not?
    - Can the number of PTEs be configured in the application or in Windows?

    Thanks, Patrick

    PS. A note for those who will try to argue that you shouldn't write your own memory manager:

    - My application is rather specific, so I really want full control over memory management (can't give any more details).
    - Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ runtime (it only said "block corrupt" minutes after the actual corruption).
    - We also tried standard Windows utilities (like GFLAGS, ...), but they slowed down the application by a factor of 100 and couldn't find the exact position of the overwrite either.
    - We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTEs).

    Read the article

  • Accessing a struct collection property from within another collection

    - by paddyb
    I have a struct that I need to store in a collection. The struct has a property that returns a Dictionary:

        public struct Item
        {
            private IDictionary<string, string> values;

            public IDictionary<string, string> Values
            {
                get { return this.values ?? (this.values = new Dictionary<string, string>()); }
            }
        }

        public class ItemCollection : Collection<Item> {}

    When testing, I've found that if I add the item to the collection and then try to access the dictionary, the struct's values field is never updated:

        var collection = new ItemCollection { new Item() }; // pre-loaded with an item
        collection[0].Values.Add("myKey", "myValue");
        Trace.WriteLine(collection[0].Values["myKey"]); // KeyNotFoundException here

    However, if I load up the item first and then add it to a collection, the values field is maintained:

        var collection = new ItemCollection();
        var item = new Item();
        item.Values.Add("myKey", "myValue");
        collection.Add(item);
        Trace.WriteLine(collection[0].Values["myKey"]); // ok

    I've already decided that a struct is the wrong option for this type, and when using a class the issue doesn't occur, but I'm curious what's different between the two methods. Can anybody explain what's happening?
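    The difference comes down to value-type copy semantics: the Collection<T> indexer returns a copy of the stored struct, so the lazily created dictionary lives only in that temporary copy and is thrown away, whereas in the second snippet the dictionary is created in the local variable before the struct is copied into the collection. A tiny illustrative sketch (the Counter type is made up, not from the question):

        using System;
        using System.Collections.ObjectModel;

        // Indexing a Collection<T> of structs hands back a *copy*, so mutations
        // made through the indexer never reach the stored element.
        struct Counter
        {
            private int count;
            public int Count { get { return count; } }
            public void Increment() { count++; }
        }

        class Program
        {
            static void Main()
            {
                var items = new Collection<Counter> { new Counter() };

                items[0].Increment();                 // mutates a copy returned by the indexer
                Console.WriteLine(items[0].Count);    // still 0

                Counter local = new Counter();
                local.Increment();                    // mutates the local value directly
                items.Add(local);
                Console.WriteLine(items[1].Count);    // 1 - the mutated value was copied in
            }
        }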

    Read the article

  • Android - Take a photo, save it in app drawables and display it in an ImageButton

    - by Andres7X
    I have an Android app with an ImageButton. When the user clicks on it, an intent launches to show the camera activity. When the user captures the image, I'd like to save it in the drawable folder of the app and display it in the same ImageButton clicked by the user, replacing the previous drawable image. I used the activity posted here: Capture Image from Camera and Display in Activity ...but when I capture an image, the activity doesn't return to the activity which contains the ImageButton. The edited code is:

        public void manage_shop() {
            static final int CAMERA_REQUEST = 1888;
            [...]
            ImageView photo = (ImageView)findViewById(R.id.getimg);
            photo.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    Intent camera = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
                    startActivityForResult(camera, CAMERA_REQUEST);
                }
            });
            [...]
        }

    And onActivityResult():

        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            ImageButton getimage = (ImageButton)findViewById(R.id.getimg);
            if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) {
                Bitmap getphoto = (Bitmap) data.getExtras().get("data");
                getimage.setImageBitmap(getphoto);
            }
        }

    How can I also store the captured image in the drawable folder?

    Read the article

  • Strongly typed dynamic Linq sorting

    - by David
    I'm trying to build some code for dynamically sorting a Linq IQueryable<T>. The obvious way is here, which sorts a list using a string for the field name: http://dvanderboom.wordpress.com/2008/12/19/dynamically-composing-linq-orderby-clauses/

    However I want one change - compile-time checking of field names, and the ability to use refactoring/Find All References to support later maintenance. That means I want to define the fields as f => f.Name, instead of as strings. For my specific use I want to encapsulate some code that would decide which of a list of named "OrderBy" expressions should be used based on user input, without writing different code every time. Here is the gist of what I've written:

        var list = from m in Movies select m;      // Get our list
        var sorter = list.GetSorter(...);          // Pass in some global user settings object
        sorter.AddSort("NAME", m => m.Name);
        sorter.AddSort("YEAR", m => m.Year).ThenBy(m => m.Year);
        list = sorter.GetSortedList();
        ...
        public class Sorter ...
        public static Sorter GetSorter(this IQueryable source, ...)

    The GetSortedList function determines which of the named sorts to use, which results in a List of FieldData objects, where each FieldData contains the MethodInfo and Type values of the fields passed in AddSort:

        public SorterItem AddSort(Func field)
        {
            MethodInfo ... = field.Method;
            Type ... = typeof(TKey);
            // Create item, add item to dictionary, add fields to item's List
            // The item has the ThenBy method, which just adds another field to the List
        }

    I'm not sure if there is a way to store the entire field object in a way that would allow it to be returned later (it would be impossible to cast, since it is a generic type). Is there a way I could adapt the sample code, or come up with entirely new code, in order to sort using strongly typed field names after they have been stored in some container and retrieved (losing any generic type casting)?
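    For what it's worth, one hedged sketch of how named, compile-time-checked sorts could be stored: each name maps to a delegate that applies OrderBy/ThenBy to an IQueryable<T>, so the key selectors stay as refactorable lambdas instead of strings. Sorter<T>, Movie and the two-selector AddSort overload are all illustrative, not the poster's API (chained ThenBy is folded into the second overload here for simplicity):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        public class Sorter<T>
        {
            private readonly Dictionary<string, Func<IQueryable<T>, IOrderedQueryable<T>>> sorts =
                new Dictionary<string, Func<IQueryable<T>, IOrderedQueryable<T>>>(StringComparer.OrdinalIgnoreCase);

            public void AddSort<TKey>(string name, Expression<Func<T, TKey>> keySelector)
            {
                sorts[name] = source => source.OrderBy(keySelector);
            }

            public void AddSort<TKey1, TKey2>(string name,
                Expression<Func<T, TKey1>> first, Expression<Func<T, TKey2>> then)
            {
                sorts[name] = source => source.OrderBy(first).ThenBy(then);
            }

            // Applies the named sort if it exists, otherwise returns the source unchanged.
            public IQueryable<T> Apply(string name, IQueryable<T> source)
            {
                Func<IQueryable<T>, IOrderedQueryable<T>> sort;
                return sorts.TryGetValue(name, out sort) ? sort(source) : source;
            }
        }

        // Usage (Movie is a stand-in entity):
        //   var sorter = new Sorter<Movie>();
        //   sorter.AddSort("NAME", m => m.Name);
        //   sorter.AddSort("YEAR", m => m.Year, m => m.Name);
        //   movies = sorter.Apply(userChoice, movies);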

    Read the article

  • How much does an InnoDB table benefit from having fixed-length rows?

    - by Philip Eve
    I know that, depending on the database storage engine in use, a performance benefit can be found if all of the rows in the table can be guaranteed to be the same length (by avoiding nullable columns and not using any VARCHAR, TEXT or BLOB columns). I'm not clear on how far this applies to InnoDB, with its funny table arrangements. Let's give an example: I have the following table

        CREATE TABLE `PlayerGameRcd` (
          `User` SMALLINT UNSIGNED NOT NULL,
          `Game` MEDIUMINT UNSIGNED NOT NULL,
          `GameResult` ENUM('Quit', 'Kicked by Vote', 'Kicked by Admin',
                            'Kicked by System', 'Finished 5th', 'Finished 4th',
                            'Finished 3rd', 'Finished 2nd', 'Finished 1st',
                            'Game Aborted', 'Playing', 'Hide')
                       NOT NULL DEFAULT 'Playing',
          `Inherited` TINYINT NOT NULL,
          `GameCounts` TINYINT NOT NULL,
          `Colour` TINYINT UNSIGNED NOT NULL,
          `Score` SMALLINT UNSIGNED NOT NULL DEFAULT 0,
          `NumLongTurns` TINYINT UNSIGNED NOT NULL DEFAULT 0,
          `Notes` MEDIUMTEXT,
          `CurrentOccupant` TINYINT UNSIGNED NOT NULL DEFAULT 0,
          PRIMARY KEY (`Game`, `User`),
          UNIQUE KEY `PGR_multi_uk` (`Game`, `CurrentOccupant`, `Colour`),
          INDEX `Stats_ind_PGR` (`GameCounts`, `GameResult`, `Score`, `User`),
          INDEX `GameList_ind_PGR` (`User`, `CurrentOccupant`, `Game`, `Colour`),
          CONSTRAINT `Constr_PlayerGameRcd_User_fk`
            FOREIGN KEY `User_fk` (`User`) REFERENCES `User` (`UserID`)
            ON DELETE CASCADE ON UPDATE CASCADE,
          CONSTRAINT `Constr_PlayerGameRcd_Game_fk`
            FOREIGN KEY `Game_fk` (`Game`) REFERENCES `Game` (`GameID`)
            ON DELETE CASCADE ON UPDATE CASCADE
        ) ENGINE=INNODB CHARACTER SET utf8 COLLATE utf8_general_ci

    The only column that is nullable is Notes, which is MEDIUMTEXT. This table presently has 33097 rows (which I appreciate is small as yet). Of these rows, only 61 have values in Notes. How much of an improvement might I see from, say, adding a new table to store the Notes column in and performing LEFT JOINs when necessary?

    Read the article

  • optimize output value using a class and public member

    - by wiso
    Suppose you have a function, and you call it a lot of times, and every time the function returns a big object. I've optimized the problem using a functor that returns void and stores the return value in a public member:

        #include <vector>

        const int N = 100;

        std::vector<double> fun(const std::vector<double> & v, const int n) {
            std::vector<double> output = v;
            output[n] *= output[n];
            return output;
        }

        class F {
        public:
            F() : output(N) {};
            std::vector<double> output;
            void operator()(const std::vector<double> & v, const int n) {
                output = v;
                output[n] *= n;
            }
        };

        int main() {
            std::vector<double> start(N, 10.);
            std::vector<double> end(N);
            double a;
            // first solution
            for (unsigned long int i = 0; i != 10000000; ++i)
                a = fun(start, 2)[3];
            // second solution
            F f;
            for (unsigned long int i = 0; i != 10000000; ++i) {
                f(start, 2);
                a = f.output[3];
            }
        }

    Yes, I can use inline or optimize this problem in another way, but here I want to stress this point: with the functor I declare and construct the output variable only one time; using the function, I do that every time it is called. The second solution is two times faster than the first with g++ -O1 or g++ -O2. What do you think about it? Is it an ugly optimization?

    Read the article

  • Named pipe stalls threads?

    - by entens
    I am attempting to push updates into a process via a named pipe, but in doing so my process loop now seems to stall on while ((line = sr.ReadLine()) != null). I'm a little mystified as to what might be wrong, as this is my first foray into named pipes.

        void RefreshThread()
        {
            using (NamedPipeServerStream pipeStream = new NamedPipeServerStream("processPipe", PipeDirection.In))
            {
                pipeStream.WaitForConnection();
                using (StreamReader sr = new StreamReader(pipeStream))
                {
                    for (; ; )
                    {
                        if (StopThread == true)
                        {
                            StopThread = false;
                            return; // exit loop and terminate the thread
                        }

                        // push update for heartbeat
                        int HeartbeatHandle = ItemDictionary["Info.Heartbeat"];
                        int HeartbeatValue = (int)Config.Items[HeartbeatHandle].Value;
                        Config.Items[HeartbeatHandle].Value = ++HeartbeatValue;
                        SetItemValue(HeartbeatHandle, HeartbeatValue, (short)0xC0, DateTime.Now);

                        string line = null;
                        while ((line = sr.ReadLine()) != null)
                        {
                            // line is in the format: item, value, timestamp
                            string[] parts = line.Split(',');

                            // push update and store value in item cache
                            int handle = ItemDictionary[parts[0]];
                            object value = parts[1];
                            Config.Items[handle].Value = int.Parse(value);
                            DateTime timestamp = DateTime.FromBinary(long.Parse(parts[2]));
                            SetItemValue(handle, value, (short)0xC0, timestamp);
                        }

                        Thread.Sleep(500);
                    }
                }
            }
        }
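    The stall is expected: WaitForConnection() and ReadLine() both block, so the heartbeat code above them only runs again after a client connects and between received lines. One possible rearrangement, sketched below with made-up names, is to keep the blocking reads on a dedicated thread and let the refresh loop drain a queue without blocking:

        using System.Collections.Generic;
        using System.IO;
        using System.IO.Pipes;
        using System.Threading;

        // Reader thread blocks on the pipe; the refresh loop calls TryDequeue,
        // which returns immediately, so the heartbeat keeps ticking.
        public class PipeLineReader
        {
            private readonly Queue<string> lines = new Queue<string>();

            public void Start()
            {
                var reader = new Thread(ReadLoop) { IsBackground = true };
                reader.Start();
            }

            private void ReadLoop()
            {
                using (var pipe = new NamedPipeServerStream("processPipe", PipeDirection.In))
                {
                    pipe.WaitForConnection();                   // blocks here, not in the refresh loop
                    using (var sr = new StreamReader(pipe))
                    {
                        string line;
                        while ((line = sr.ReadLine()) != null)  // blocks between messages
                        {
                            lock (lines) lines.Enqueue(line);
                        }
                    }
                }
            }

            // Called from the refresh loop; false means nothing has arrived yet.
            public bool TryDequeue(out string line)
            {
                lock (lines)
                {
                    if (lines.Count > 0) { line = lines.Dequeue(); return true; }
                }
                line = null;
                return false;
            }
        }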

    Read the article

  • What prevents a user from adding controls to an ASP.NET page client side?

    - by Curtis White
    This goes back to my other question, which I thought was sufficiently answered but upon reflection am not sure that it was (sorry). Background: I am generating a form dynamically, pulling the controls from the database. I must associate each control with a database ID which is not the user's session ID. I do this currently by storing my ID in the ID of the web control, with some other stuff to make it unique/clear what I am doing. On the postback, I iterate through all the controls on my web page checking for my special identifier, i.e. MyGeneratedTextBox_ID_Unique. This process enables 2 important steps: identifying that the control was one I generated, and getting the ID for this input field. All of this works, but I'm still concerned about the security of it. I do not see a security issue with showing the actual database IDs in this case, although I agree it is not desirable. However, I am concerned about the following possibilities:

    - A user could add a nefarious control to my collection and use that for a SQL injection attack.
    - More academic, but a user could somehow store data for fields they do not have access to by changing the IDs.

    I agree this is a "hack" of a way to do it. But my question is: is it a security risk, and is there an 'easy' way to do it in a less hacky way? I assume that only the controls that are created/instantiated on the page are added to the controls list, thus all controls must be created server side and thus the security issue is addressed, but I just wanted to validate. Thanks again. PS: I could see that adding a property for each control and encrypting the viewstate would be a little more secure.
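    The assumption at the end is correct: the server-side Controls collection only ever contains controls the page itself instantiated, so a visitor cannot inject a control, only post extra or altered form values. A hedged sketch of the usual safeguards (walk only your own generated controls and never concatenate the embedded ID into SQL); the naming scheme, table and column names here are hypothetical:

        using System.Data.SqlClient;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public static class GeneratedFormSaver
        {
            // Recursively walks the server's own control tree, so only controls the
            // page created can ever be picked up, and parameterizes every value.
            public static void SaveGeneratedFields(Control root, SqlConnection connection, int userId)
            {
                foreach (Control child in root.Controls)
                {
                    var box = child as TextBox;
                    if (box != null && box.ID != null && box.ID.StartsWith("MyGeneratedTextBox_"))
                    {
                        int fieldId;
                        // ID format assumed: MyGeneratedTextBox_<databaseId>_<unique>
                        if (int.TryParse(box.ID.Split('_')[1], out fieldId))
                        {
                            using (var command = new SqlCommand(
                                "UPDATE FormAnswer SET Value = @value WHERE FieldId = @fieldId AND UserId = @userId",
                                connection))
                            {
                                command.Parameters.AddWithValue("@value", box.Text);
                                command.Parameters.AddWithValue("@fieldId", fieldId);
                                command.Parameters.AddWithValue("@userId", userId);
                                command.ExecuteNonQuery();
                            }
                        }
                    }
                    SaveGeneratedFields(child, connection, userId);   // recurse into child containers
                }
            }
        }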

    Read the article

  • How to marshal an object and its content (also objects)

    - by Waldo Spek
    I have a question for which I suspect the answer is a bit complex. At this moment I am programming a DLL (class library) in C#. This DLL uses a 3rd-party library and therefore deals with 3rd-party objects for which I do not have the source code. Now I am planning to create another DLL, which is going to be used at a later stage in my application. This second DLL should use the 3rd-party objects (with corresponding object states) created by the first DLL. Luckily the 3rd-party objects extend the MarshalByRefObject class. I can marshal the objects using System.Runtime.Remoting.Marshal(...). I then serialize the objects using a BinaryFormatter and store them as a byte[] array. All goes well. I can deserialize and unmarshal in the opposite way and end up with my original 3rd-party objects... so it appears... Nevertheless, when calling methods on my deserialized 3rd-party objects I get object-internal exceptions. Normally these methods return other 3rd-party objects, but (obviously, I guess) now these objects are missing because they weren't serialized. Now my global question: how would I go about marshalling/serializing all the objects which my 3rd-party objects reference... and cascading down the "reference tree" to obtain a full and complete serialized object? Right now my guess is to preprocess: obtain all the objects, build my own custom object and serialize it. But I'm hoping there is some other way...
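    For what it's worth, BinaryFormatter already cascades down the reference tree on its own; if the byte[] came from serializing what RemotingServices.Marshal(...) returned, what was captured is presumably a proxy reference (ObjRef) rather than object state. The sketch below shows only the plain by-value path, which helps only if the 3rd-party types (and everything they reference) are themselves marked [Serializable]:

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        // Sketch only: Serialize walks every reachable reference, but every object
        // it reaches must be serializable by value. MarshalByRefObject types that
        // are not also [Serializable] yield a proxy, never their internal state,
        // so this cannot capture objects the library keeps purely by reference.
        public static class GraphSnapshot
        {
            public static byte[] Capture(object root)
            {
                using (var stream = new MemoryStream())
                {
                    new BinaryFormatter().Serialize(stream, root);   // follows all reachable references
                    return stream.ToArray();
                }
            }

            public static object Restore(byte[] snapshot)
            {
                using (var stream = new MemoryStream(snapshot))
                {
                    return new BinaryFormatter().Deserialize(stream);
                }
            }
        }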

    Read the article

  • Generic that takes only numeric types (int, double, etc.)?

    - by brandon
    In a program I'm working on, I need to write a function to take any numeric type (int, short, long, etc.) and shove it into a byte array at a specific offset. There exists a BitConverter.GetBytes() method that takes the numeric type and returns it as a byte array, and this method only takes numeric types. So far I have:

        private void AddToByteArray<T>(byte[] destination, int offset, T toAdd) where T : struct
        {
            Buffer.BlockCopy(BitConverter.GetBytes(toAdd), 0, destination, offset, sizeof(toAdd));
        }

    So basically my goal is that, for example, a call to AddToByteArray(array, 3, (short)10) would take 10 and store it in the 4th slot of array. The explicit cast exists because I know exactly how many bytes I want it to take up. There are cases where I would want a number that is small enough to be a short to really take up 4 bytes. On the flip side, there are times when I want an int to be crunched down to just a single byte. I'm doing this to create a custom network packet, if that makes any ideas pop into your heads. If the where clause of a generic supported something like "where T : int || long || etc" I would be ok. (And no need to explain why they don't support that; the reason is fairly obvious.) Any help would be greatly appreciated!

    Edit: I realize that I could just do a bunch of overloads, one for each type I want to support... but I'm asking this question because I want to avoid precisely that :)
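    One workaround, sketched below with made-up names: skip generics entirely, accept a long (every integral type converts to it implicitly) plus an explicit byte count, and write the low-order bytes yourself, which also gives the "crunch an int into one byte" behaviour described above. Little-endian, matching BitConverter on x86.

        using System;

        public static class PacketWriter
        {
            public static void AddToByteArray(byte[] destination, int offset, long value, int byteCount)
            {
                if (byteCount < 1 || byteCount > 8)
                    throw new ArgumentOutOfRangeException("byteCount");
                if (offset + byteCount > destination.Length)
                    throw new ArgumentException("Not enough room in the destination array.");

                for (int i = 0; i < byteCount; i++)
                {
                    destination[offset + i] = (byte)(value & 0xFF);   // emit low-order byte
                    value >>= 8;
                }
            }
        }

        // Usage: put 10 into the 4th slot as a single byte, or as four bytes:
        //   PacketWriter.AddToByteArray(array, 3, 10, 1);
        //   PacketWriter.AddToByteArray(array, 3, 10, 4);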

    Read the article
