Search Results

Search found 7085 results on 284 pages for 'termine updates'.


  • How can I ensure my programmatic uploads are done in the correct order?

    - by ccomet
    In our application, we store two copies of a file - an approved one and an unapproved one. Both track their versions separately. When the unapproved file is approved, all of its versions are added as new versions to the approved file. To do this properly, my code has to upload each version separately into the approved folder and update the item each time with that version's information.

    For some reason, though, this doesn't always work properly. In my latest scenario, the latest version was uploaded first, and then all of the remaining versions were uploaded afterwards. However, my code is explicitly supposed to upload the other versions first - that's the order I wrote it in. Why is this happening? And if it is possible, how do I ensure that the versions are uploaded in the correct order?

    Clarification - it's not a problem with the enumeration; I'm getting the previous versions in the correct order. What is happening is that the final version, which is written after the loop, is being uploaded before the loop. Which really doesn't make any sense to me. Here's a condensed version of the relevant code:

        //These three are initialized earlier in the code.
        SPList list;       //The document library
        SPListItem item;   //The list item in the Unapproved folder
        int AID;           //The item id of the corresponding item in the Approved folder
        byte[] contents;   //Not initialized.

        /* These uploads are happening second when they should happen first. */
        if (item.File.Versions.Count > 0)
        {
            //This loop is actually a separate method call if that matters.
            //For simplicity I expanded it here.
            foreach (SPFileVersion fVer in item.File.Versions)
            {
                if (!fVer.IsCurrentVersion)
                {
                    contents = fVer.OpenBinary();
                    SPFile fSub = aFolder.Files.Add(fVer.File.Name, contents, u1, fVer.CreatedBy, dt1, fVer.Created);
                    SPListItem subItem = list.GetItemById(AID);
                    //This method updates the newly uploaded version with the field data of that version.
                    UpdateFields(item.Versions.GetVersionFromLabel(fVer.VersionLabel), subItem);
                }
            }
        }

        /* This upload happens first when it should happen last. */
        //Does the same as the earlier loop, but for the final version.
        contents = item.File.OpenBinary();
        SPFile f = aFolder.Files.Add(item.File.Name, contents, u1, u2, dt1, dt2);
        SPListItem finalItem = list.GetItemById(AID);
        UpdateFields(item.Versions[0], finalItem);
        item.Delete();
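    One way to make the intended order explicit - a hedged sketch only, reusing the question's variables (aFolder, u1, u2, dt1, dt2, AID, UpdateFields) and assuming System.Linq is available - is to materialize the prior versions and sort them by creation time before uploading, so the upload order no longer depends on how the collection happens to enumerate:

        // Sketch: force oldest-first upload order by sorting the versions
        // explicitly, rather than relying on the collection's enumeration order.
        var priorVersions = item.File.Versions
            .Cast<SPFileVersion>()
            .Where(v => !v.IsCurrentVersion)
            .OrderBy(v => v.Created)        // oldest first
            .ToList();

        foreach (SPFileVersion fVer in priorVersions)
        {
            byte[] bytes = fVer.OpenBinary();
            aFolder.Files.Add(fVer.File.Name, bytes, u1, fVer.CreatedBy, dt1, fVer.Created);
            UpdateFields(item.Versions.GetVersionFromLabel(fVer.VersionLabel), list.GetItemById(AID));
        }

        // Only now upload the current version, so it is guaranteed to be last.
        aFolder.Files.Add(item.File.Name, item.File.OpenBinary(), u1, u2, dt1, dt2);
        UpdateFields(item.Versions[0], list.GetItemById(AID));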

    Read the article

  • Can you find a pattern to sync files knowing only dates and filenames?

    - by Robert MacLean
    Imagine, if you will, an operating system that has the following methods for files:

    Create File: creates (writes) a new file to disk. Calling this when the file already exists causes a fault.
    Update File: updates an existing file. Calling this when the file doesn't exist causes a fault.
    Read File: reads data from a file.
    Enumerate Files: gets all files in a folder.

    Files in this operating system have only the following metadata:

    Created Time: the original date and time the file was created by the Create File method.
    Modified Time: the date and time the file was last modified by the Update File method. If the file has never been modified, this equals the Created Time.

    You have been given the task of writing an application which will sync the files between two directories (let's call them bill and ted) on a machine. However, it is not that simple; the client requires that:

    1. The application never faults (see the methods above).
    2. While the application is running, users can add and update files, and those will be synced the next time the application runs.
    3. Files can be added to either the ted or the bill directory.
    4. File names cannot be altered.
    5. The application performs one sync per run.
    6. The application must run almost entirely in memory; in other words, you cannot create a log of filenames, write it to disk, and check it the next time.
    7. The exception to point 6 is that you can store dates and times between runs. Each date/time is associated with a key labeled A through J (so you have 10 to use), letting you compare keys between runs.
    8. There is no way to catch exceptions in the application.

    An answer will be accepted based on the following conditions: the first answer to meet all requirements will be accepted; if there is no way to meet all requirements, the answer which ensures the smallest number of missed changes per sync will be accepted. A bounty (100 points) will be created as soon as possible for the prize. The winner will be selected one day before the bounty ends. Please ask questions in the comments and I will gladly update and refine the question based on them. A sketch of the basic sync pass appears below.
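    A minimal C# sketch of one fault-free sync pass, with the four OS methods modeled as an interface (all names here are stand-ins, and the key A-J refinement is deliberately left out):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Stand-ins for the four OS methods described above (assumptions).
        public record FileMeta(string Name, DateTime CreatedTime, DateTime ModifiedTime);

        public interface IToyFileSystem
        {
            void CreateFile(string dir, string name, byte[] data);  // faults if it exists
            void UpdateFile(string dir, string name, byte[] data);  // faults if missing
            byte[] ReadFile(string dir, string name);
            IEnumerable<FileMeta> EnumerateFiles(string dir);
        }

        public static class Sync
        {
            // Every Create/Update is guarded by an existence check derived from
            // EnumerateFiles, so neither call can fault. Comparing raw Modified
            // times is the naive part the stored A-J timestamps must refine.
            static void OneWay(IToyFileSystem fs, string from, string to)
            {
                var source = fs.EnumerateFiles(from).ToDictionary(f => f.Name);
                var target = fs.EnumerateFiles(to).ToDictionary(f => f.Name);

                foreach (var file in source.Values)
                {
                    if (!target.TryGetValue(file.Name, out var other))
                        fs.CreateFile(to, file.Name, fs.ReadFile(from, file.Name));
                    else if (file.ModifiedTime > other.ModifiedTime)
                        fs.UpdateFile(to, file.Name, fs.ReadFile(from, file.Name));
                }
            }

            public static void Pass(IToyFileSystem fs, string bill, string ted)
            {
                OneWay(fs, bill, ted);
                OneWay(fs, ted, bill);
            }
        }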

    Read the article

  • How to sort my paws?

    - by Ivo Flipse
    In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws. I manually annotated the paws (RF = right front, RH = right hind, LF = left front, LH = left hind). As you can see, there's clearly a repeating pattern, and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated.

    My initial thought was to use heuristics to do the sorting, like: there's a ~60-40% ratio in weight bearing between the front and hind paws; the hind paws are generally smaller in surface; the paws are (often) spatially divided into left and right. However, I'm a bit skeptical about my heuristics, as they would fail on me as soon as I encountered a variation I hadn't thought of. They also won't be able to cope with measurements from lame dogs, which probably have rules of their own. Furthermore, the annotation suggested by Joe sometimes gets messed up and doesn't take into account what the paw actually looks like.

    Based on the answers I received on my question about peak detection within the paw, I'm hoping there are more advanced solutions for sorting the paws - especially because the pressure distribution and its progression are different for each separate paw, almost like a fingerprint. I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence. So I'm looking for a better way to sort the results with their corresponding paw.

    For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement) and the slice that describes their location (location on the plate and in time). To clarify: walk_sliced_data is a dictionary that contains ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement contains another dictionary, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (example from 'sel_1'), which represents the impacts that were extracted. Also note that 'false' impacts, such as where the paw is only partially measured (in space or time), can be ignored; they are only useful because they can help in recognizing a pattern, but they won't be analyzed.

    And for anyone interested, I'm keeping a blog with all the updates regarding the project!

    Read the article

  • synchronizing XML nodes between class and file using C#

    - by Sarah Vessels
    I'm trying to write an IXmlSerializable class that stays synced with an XML file. The XML file has the following format:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <logging>
            <logLevel>Error</logLevel>
          </logging>
          ...potentially other sections...
        </configuration>

    I have a DllConfig class for the whole XML file and a LoggingSection class for representing <logging> and its contents, i.e., <logLevel>. DllConfig has this property:

        [XmlElement(ElementName = LOGGING_TAG_NAME, DataType = "LoggingSection")]
        public LoggingSection Logging { get; protected set; }

    What I want is for the backing XML file to be updated (i.e., rewritten) when a property is set. I already have DllConfig do this when Logging is set. However, how should I go about doing this when Logging.LogLevel is set? Here's an example of what I mean:

        var config = new DllConfig("path_to_backing_file.xml");
        config.Logging.LogLevel = LogLevel.Information;
        // not using the Logging setter, but a setter on LoggingSection,
        // so how does path_to_backing_file.xml have its contents updated?

    My current solution is to have a SyncedLoggingSection class that inherits from LoggingSection and also takes a DllConfig instance in the constructor. It declares a new LogLevel that, when set, updates the LogLevel in the base class and also uses the given DllConfig to write the entire DllConfig out to the backing XML file. Is this a good technique?

    I don't think I can just serialize SyncedLoggingSection by itself to the backing XML file, because not all of the contents would be written - just the <logging> node. I'd end up with an XML file containing only the <logging> section with its updated <logLevel>, instead of the entire config file with <logLevel> updated. Hence, I need to pass an instance of DllConfig to SyncedLoggingSection. It seems almost like I want an event handler - one in DllConfig that would notice when particular properties (i.e., LogLevel) of its properties (i.e., Logging) were set. Is such a thing possible?
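    For what it's worth, the event-handler idea is possible; here is a minimal sketch (the Changed event, Save method, and constructor wiring are assumptions for illustration, not the poster's actual API):

        using System;
        using System.Xml.Serialization;

        public enum LogLevel { Error, Information }

        public class LoggingSection
        {
            // Raised whenever a contained property changes.
            public event EventHandler Changed;

            private LogLevel logLevel;
            public LogLevel LogLevel
            {
                get { return logLevel; }
                set { logLevel = value; Changed?.Invoke(this, EventArgs.Empty); }
            }
        }

        public class DllConfig
        {
            private readonly string path;

            public LoggingSection Logging { get; protected set; }

            public DllConfig() : this("config.xml") { } // parameterless ctor for XmlSerializer

            public DllConfig(string path)
            {
                this.path = path;
                Logging = new LoggingSection();
                // Any change inside the section rewrites the whole file,
                // so the document on disk always stays complete.
                Logging.Changed += (s, e) => Save();
            }

            private void Save()
            {
                using (var writer = System.IO.File.CreateText(path))
                    new XmlSerializer(typeof(DllConfig)).Serialize(writer, this);
            }
        }

    This keeps the "rewrite the entire document" responsibility in DllConfig, so no section needs a back-reference to its parent the way SyncedLoggingSection does.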

    Read the article

  • Segmentation fault in std function std::_Rb_tree_rebalance_for_erase ()

    - by Sarah
    I'm somewhat new to programming and am unsure how to deal with a segmentation fault that appears to be coming from a std function. I hope I'm doing something stupid (i.e., misusing a container), because I have no idea how to fix it. The precise error is:

        Program received signal EXC_BAD_ACCESS, Could not access memory.
        Reason: KERN_INVALID_ADDRESS at address: 0x000000000000000c
        0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase ()
        (gdb) backtrace
        #0  0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase ()
        #1  0x000000010000e593 in Simulation::runEpidSim (this=0x7fff5fbfcb20) at stl_tree.h:1263
        #2  0x0000000100016078 in main () at main.cpp:43

    The function that exits successfully just before the segmentation fault updates the contents of two containers. One is a boost::unordered_multimap called carriage; it contains one or more struct Infection objects that hold two doubles. The other container is of type std::multiset<Event, std::less<Event>>, typedef'd as EventPQ and called ce. It is full of Event structs.

        void Host::recover(int s, double recoverTime, EventPQ& ce) {
            // Clearing all serotypes in carriage
            // and their associated recovery events in ce,
            // then updating susceptibility to each serotype
            double oldRecTime;
            int z;
            for (InfectionMap::iterator itr = carriage.begin(); itr != carriage.end(); itr++) {
                z = itr->first;
                oldRecTime = (itr->second).recT;
                EventPQ::iterator epqItr = ce.find(Event(oldRecTime));
                assert(epqItr != ce.end());
                ce.erase(epqItr);
                immune[z]++;
            }
            carriage.clear();
            calcSusc(); // a function that edits an array
            cout << "Done with sync_recovery event." << endl;
        }

    The last cout line appears immediately before the seg fault. I hope this is enough (but not too much) information. My idea so far is that the rebalancing is being attempted on ce after this function returns, but I am unsure why it would be failing. (It's unfortunately very hard for me to test this code by removing particular lines, since that would create logical inconsistencies and further problems, but if experienced programmers still think that's the way to go, I'll try.)

    Read the article

  • Core-Data + AFNetworking + UI Updating (Responsiveness)

    - by Mustafa
    Here's the scenario: I'm writing a DownloadManager that allows the user to download, pause, cancel, download all, and pause all. The DownloadManager is a singleton and uses AFNetworking to download files. It has its own private managed object context, so that the user can freely use other parts of the application (adding, editing, deleting core-data objects). I have a core-data entity DownloadInfo that stores the download information, i.e., fileURL, fileSize, bytesRead, etc. The DownloadManager updates the download progress in DownloadInfo (one for each file).

    I have a DownloadManagerViewController which uses NSFetchedResultsController to show the download status to the user. This download view controller is using the main managed object context. Now let's say that I have 20 files in the download queue, and only 3 concurrent downloads are allowed. The download manager should download the files and show the download progress.

    Problem: the DownloadInfo objects are being updated by the DownloadManager at a very high rate, and the DownloadManagerViewController (responsible for showing the download progress) is updating the list via the NSFetchedResultsControllerDelegate methods. The result is that a lot is happening in the main queue and the application has very poor responsiveness.

    How can I fix this? How can I make the application responsive while still showing the download progress? I don't know how else to communicate the download status between DownloadManager and DownloadManagerViewController. Is there another/better way to do this? I don't want to use the main managed object context in my DownloadManager, for the reasons mentioned above.

    Note that the DownloadManager is using AFNetworking, which handles the requests asynchronously, but eventually the DownloadInfo objects are updated on the main thread (as a result of the callback methods). Maybe there's a way to handle the downloads and status-update operations in a background thread? But how? How would I communicate between the main thread and the background thread, i.e., how would I tell the background thread to queue another file for download? Thanks.
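    The underlying pattern - coalescing high-frequency progress writes and sampling them on a timer - is stack-agnostic; here is a hedged illustration of the idea in C# (all names hypothetical), rather than Core Data specifics:

        using System;
        using System.Collections.Generic;
        using System.Collections.Concurrent;

        // Sketch only: download callbacks write an in-memory snapshot at full
        // speed, on any thread; a UI timer samples it a few times per second,
        // so the main thread paints 4 times/sec instead of thousands.
        class DownloadProgressHub
        {
            private readonly ConcurrentDictionary<string, long> bytesRead =
                new ConcurrentDictionary<string, long>();

            // Called from download callbacks at any rate.
            public void Report(string fileUrl, long bytes) => bytesRead[fileUrl] = bytes;

            // `render` must marshal to the UI thread in a real app.
            public void StartUiSampling(Action<IReadOnlyDictionary<string, long>> render)
            {
                var timer = new System.Timers.Timer(250);
                timer.Elapsed += (s, e) => render(bytesRead);
                timer.Start();
            }
        }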

    Read the article

  • Query Results Not Expected

    - by E-Madd
    I've been a CF developer for 15 years and I've never run into anything this strange or frustrating. I've pulled my hair out for hours, googled, abstracted, simplified, prayed, and done it all in reverse. Can you help me?

    A cffunction takes one string argument, and from that string I build an array of "phrases" to run a query with, attempting to match a location name in my database. For example, the string "the republic of boulder" would produce the array:

        ["the","republic","of","boulder","the republic","the republic of","the republic of boulder","republic of","republic of boulder","of boulder"]

    Another cffunction uses the aforementioned cffunction and runs a cfquery. A query based on the previous example would be:

        select locationid, locationname, locationaliasname
        from vwLocationsWithAlias
        where LocationName in ('the','the republic','the republic of','republic','republic of','republic of boulder','of','of boulder','boulder')
        or LocationAliasName in ('the','the republic','the republic of','republic','republic of','republic of boulder','of','of boulder','boulder')

    This returns 2 records:

        locationid - locationname - locationalias
        99 - 'Boulder' - 'the republic'
        68 - 'Boulder' - NULL

    This is good - works fine and dandy. HOWEVER, if the string is changed to "the republic", resulting in the phrases array ["the","republic","the republic"], which is then used to produce the query:

        select locationid, locationname, locationaliasname
        from vwLocationsWithAlias
        where LocationName in ('the','the republic','republic')
        or LocationAliasName in ('the','the republic','republic')

    This returns 0 records. Say what?! OK, just to make sure I'm not involuntarily HIGH, I run that very same query in my SQL console against the same database as the CF datasource. 1 RECORD!

        locationid - locationname - locationalias
        99 - 'Boulder' - 'the republic'

    I can even hard-code that SQL within the same cffunction and get that one result, but never from the dynamically generated SQL. I can get my location phrases from another cffunction of a different name that returns hard-coded array values, and those work - but never if the array is dynamically built. I've tried removing cfqueryparams, triple-checking my datatypes, datasource setups, etc., etc., etc. NO DICE. WTF!? Is this an obscure bug? Am I losing my mind? I've tried everything I can think of, and everything others (including Ray Camden) can think of.

    ColdFusion 8 (with all the latest hotfixes), SQL Server 2005 (with all the greatest service packs), Windows 2003 Server (with all the latest updates, service packs, and nightly MS voodoo).

    Read the article

  • GUI Application becomes unresponsive while http request is being done

    - by JW
    I have just made my first proper little desktop GUI application that basically wraps a GUI (php-gtk) interface around a SimpleTest web test case to make it act as a remote testing client. Each time the local web test case runs, it sends an HTTP request to another SimpleTest case (one with an XHTML interface) sitting on my server. The application allows me to run one local test that collates information from multiple remote tests. It just has a 'Start Test' button, a 'Stop Test' button, and a setting to increase/decrease the number of remote tests conducted in each HTTP request. Each test run takes about an hour to complete.

    The trouble is, most of the time the application is making HTTP requests, and whenever an HTTP request is being made, the application's GUI is unresponsive. I have taken to making the application wait a few seconds (iterating through Gtk::main_iteration) between requests in order to give the user time to re-size the window, press the Stop button, etc. But this makes the whole test run take a lot longer than is necessary.

        <?php
        require_once('simpletest/web_tester.php');

        class TestRemoteTestingClient extends WebTestCase
        {
            function testRunIterations()
            {
                ...
                $this->assertTrue($this->get($nextUrl), 'getting from pointer:' . $this->_remoteMementoPointer);
                $this->assertResponse(200, "checking response for " . $nextUrl);
                $this->assertText('RemoteNodeGreen');
                $this->doGtkIterationsForMinNSeconds($secs);
                ...
            }

            public function doGtkIterationsForMinNSeconds($secs)
            {
                $this->appendStatusMessage("Waiting " . $secs);
                $start = time();
                $end = $start + $secs;
                while (time() < $end) {
                    $this->appendStatusMessage("Waiting " . ($end - time()));
                    while (Gtk::events_pending()) Gtk::main_iteration();
                }
            }
        }

    Is there a way to keep the application responsive while, at the same time, making an HTTP request? I am considering splitting the application into two, where:

    Test Controller Application - acts as a settings-writer / report-reader: it writes to a settings file and reads a report file.
    Test Runner Application - acts as a settings-reader / report-writer: for each iteration it reads the settings file, runs the test, then writes a report.

    So to tell it to shut down, I'd press the Stop button on the Test Controller Application, which writes to the settings file, which is read by the Test Runner Application, which stops and then writes to the report file to say it stopped; the Test Controller Application reads the report and updates the status, and so on...

    However, before I go ahead and split the application in two, I am wondering if there is any other obvious way to deal with this issue. I suspect it is probably quite common and a well-trodden path. Also, is there an easier way to send messages between two applications?

    Read the article

  • Hibernate + Spring : cascade deletion ignoring non-nullable constraints

    - by E.Benoît
    Hello, I seem to be having one weird problem with some Hibernate data classes. In a very specific case, deleting an object should fail due to existing, non-nullable relations - however, it does not. The strangest part is that a few other classes related to the same definition behave appropriately. I'm using HSQLDB 1.8.0.10, Hibernate 3.5.0 (final) and Spring 3.0.2. The Hibernate properties are set so that batch updates are disabled.

    The class whose instances are being deleted is:

        @Entity( name = "users.Credentials" )
        @Table( name = "credentials" , schema = "users" )
        public class Credentials extends ModelBase {
            private static final long serialVersionUID = 1L;

            /* Some basic fields here */

            /** Administrator credentials, if any */
            @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY )
            public AdminCredentials adminCredentials;

            /** Active account data */
            @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY )
            public Account activeAccount;

            /* Some more reverse relations here */
        }

    (ModelBase is a class that simply declares a Long field named "id" as being automatically generated.)

    The Account class, for which the constraints work, looks like this:

        @Entity( name = "users.Account" )
        @Table( name = "accounts" , schema = "users" )
        public class Account extends ModelBase {
            private static final long serialVersionUID = 1L;

            /** Credentials the account is linked to */
            @OneToOne( optional = false )
            @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false )
            public Credentials credentials;

            /* Some more fields here */
        }

    And here is the AdminCredentials class, for which the constraints are ignored:

        @Entity( name = "admin.Credentials" )
        @Table( name = "admin_credentials" , schema = "admin" )
        public class AdminCredentials extends ModelBase {
            private static final long serialVersionUID = 1L;

            /** Credentials linked with an administrative account */
            @OneToOne( optional = false )
            @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false )
            public Credentials credentials;

            /* Some more fields here */
        }

    The code that attempts to delete the Credentials instances is:

        try {
            if ( account.validationKey != null ) {
                this.hTemplate.delete( account.validationKey );
            }
            this.hTemplate.delete( account.languageSetting );
            this.hTemplate.delete( account );
        } catch ( DataIntegrityViolationException e ) {
            return false;
        }

    where hTemplate is a HibernateTemplate instance provided by Spring, its flush mode having been set to EAGER.

    In the conditions shown above, the deletion will fail if there is an Account instance that refers to the Credentials instance being deleted, which is the expected behaviour. However, an AdminCredentials instance will be ignored; the deletion will succeed, leaving an invalid AdminCredentials instance behind (trying to refresh that instance causes an error because the Credentials instance no longer exists).

    I have tried moving the AdminCredentials table from the admin DB schema to the users DB schema. Strangely enough, a deletion-related error is then triggered - but not in the deletion code. It is triggered at the next query involving the table, seemingly ignoring the flush mode setting. I've been trying to understand this for hours and I must admit I'm just as clueless now as I was then.

    Read the article

  • Long Running Stored Proc - Report Progress Using BackgroundWorker & Timer

    - by daveywc
    While a long-running stored proc (RMR_Seek) is executing (called via a LINQ-to-SQL data context), I am trying to call another stored proc (RMR_GetLatestModelMessage) to check a table for the latest status message. The long-running stored proc updates the table in question with status messages as it executes. I want to display the status message on a message panel to advise the user of the status of the execution of RMR_Seek. For various reasons it is not possible to determine how long RMR_Seek will take to execute, so a progress bar with percentage increments is not feasible.

    I thought I'd found the way to do it by calling the long-running stored proc from within a BackgroundWorker DoWork event handler. This worked fine and allowed me to update my message panel with some dummy status messages that were NOT obtained via RMR_GetLatestModelMessage while RMR_Seek was running. However, now that I have tried to implement this fully, by calling RMR_GetLatestModelMessage to obtain the status messages, I am running into problems that seem to be related to the mix of the BackgroundWorker and my System.Windows.Forms.Timer.

    An extract of the code I am using is below. I have tried many different ways around this, but each one seems to present its own set of problems. The code below is problematic in the bw_DoWork event: the RMR_Seek stored proc gets called but does not execute properly, and it also seems to be inconsistent as to whether _IsCompleted gets set to true. I'm sure there is a better way to achieve what I am trying to do.

        private bool _IsCompleted;

        private void RunRevenueSeek()
        {
            if (_SelectedModel == null)
            {
                MessageBox.Show("Please select a model from the list and try again.",
                    "Model Generation", MessageBoxButtons.OK, MessageBoxIcon.Information);
            }
            else
            {
                var bw = new BackgroundWorker();
                bw.DoWork += new DoWorkEventHandler(bw_DoWork);
                ProgressPanelControl.Visible = true;
                _IsCompleted = false;
                MessageTimer.Start(); // Has an interval of 3000
                bw.RunWorkerAsync();
                ProgressLabelControl.Text = "Refreshing Data";
                this.Update();
                // ...more code goes here
            }
        }

        private void bw_DoWork(object sender, DoWorkEventArgs e)
        {
            using (var dc = new RevMdlrDataClassesDataContext())
            {
                dc.CommandTimeout = 300;
                dc.RMR_Seek(_SelectedModel.ModelSet_ID);
                _IsCompleted = true;
            }
        }

        private void MessageTimer_Tick(object sender, EventArgs e)
        {
            string message = "";
            if (_IsCompleted)
            {
                MessageTimer.Stop();
            }
            else
            {
                using (var dc = new RevMdlrDataClassesDataContext())
                {
                    dc.CommandTimeout = 300;
                    dc.RMR_GetLatestModelMessage(_SelectedModel.ModelSet_ID, ref message);
                    ProgressLabelControl.Text = message;
                    this.Update();
                }
            }
        }
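    One way to remove the shared _IsCompleted flag entirely - a hedged sketch reusing the question's own identifiers, with the wiring itself an assumption - is to let RunWorkerCompleted stop the timer, since that callback is raised on the UI thread when DoWork finishes:

        var bw = new BackgroundWorker();
        bw.DoWork += (s, e) =>
        {
            using (var dc = new RevMdlrDataClassesDataContext())
            {
                dc.CommandTimeout = 300;
                dc.RMR_Seek(_SelectedModel.ModelSet_ID);
            }
        };
        bw.RunWorkerCompleted += (s, e) =>
        {
            // Raised on the UI thread: no shared flag, no race with the timer.
            MessageTimer.Stop();
            if (e.Error != null) ProgressLabelControl.Text = "Model run failed.";
        };
        MessageTimer.Start();   // ticks keep polling RMR_GetLatestModelMessage as before
        bw.RunWorkerAsync();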

    Read the article

  • Cancel outlook meeting requests via MailMessage in C#

    - by BTmuney
    I'm creating an application using the ASP.NET MVC 1 framework in C#, where I have users that register for events. Upon registering, I create an Outlook meeting request:

        public string BuildMeetingRequest(DateTime start, DateTime end, string attendees, string organizer,
            string subject, string description, string UID, string location)
        {
            System.Text.StringBuilder sw = new System.Text.StringBuilder();
            sw.AppendLine("BEGIN:VCALENDAR");
            sw.AppendLine("VERSION:2.0");
            sw.AppendLine("METHOD:REQUEST");
            sw.AppendLine("BEGIN:VEVENT");
            sw.AppendLine(attendees);
            sw.AppendLine("CLASS:PUBLIC");
            sw.AppendLine(string.Format("CREATED:{0:yyyyMMddTHHmmssZ}", DateTime.UtcNow));
            sw.AppendLine("DESCRIPTION:" + description);
            sw.AppendLine(string.Format("DTEND:{0:yyyyMMddTHHmmssZ}", end));
            sw.AppendLine(string.Format("DTSTAMP:{0:yyyyMMddTHHmmssZ}", DateTime.UtcNow));
            sw.AppendLine(string.Format("DTSTART:{0:yyyyMMddTHHmmssZ}", start));
            sw.AppendLine("ORGANIZER;CN=\"NAME\":mailto:" + organizer);
            sw.AppendLine("SEQUENCE:0");
            sw.AppendLine("UID:" + UID);
            sw.AppendLine("LOCATION:" + location);
            sw.AppendLine("SUMMARY;LANGUAGE=en-us:" + subject);
            sw.AppendLine("BEGIN:VALARM");
            sw.AppendLine("TRIGGER:-PT720M");
            sw.AppendLine("ACTION:DISPLAY");
            sw.AppendLine("DESCRIPTION:Reminder");
            sw.AppendLine("END:VALARM");
            sw.AppendLine("END:VEVENT");
            sw.AppendLine("END:VCALENDAR");
            return sw.ToString();
        }

    And once built, I use MailMessage with an alternate view to send out the meeting request:

        meetingInfo = BuildMeetingRequest(start, end, attendees, organizer, subject, description, UID, location);
        System.Net.Mime.ContentType mimeType = new System.Net.Mime.ContentType("text/calendar; method=REQUEST");
        AlternateView ICSview = AlternateView.CreateAlternateViewFromString(meetingInfo, mimeType);
        MailMessage message = new MailMessage();
        message.To.Add(to);
        message.From = new MailAddress(from);
        message.AlternateViews.Add(ICSview);
        SmtpClient client = new SmtpClient();
        client.Send(message);

    When users get the email in Outlook, it shows up as a meeting request, as opposed to a normal email. This works well for sending out updates to the meeting request as well. The only problem I am having is that I do not know the proper format for sending out a cancellation. I've attempted to examine some meeting request cancellations in text editors and can't seem to pinpoint the difference between the cancelling and creating formats. Any help on this is greatly appreciated.
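    For reference, in iTIP (RFC 5546) a cancellation is the same VEVENT re-sent with METHOD:CANCEL, STATUS:CANCELLED, the original UID, and a SEQUENCE higher than the request's; the MIME type becomes "text/calendar; method=CANCEL". A hedged sketch mirroring the question's builder (untested against Outlook specifically):

        // Sketch only: the cancellation counterpart of BuildMeetingRequest.
        public string BuildMeetingCancellation(DateTime start, DateTime end, string attendees,
            string organizer, string subject, string UID, int sequence)
        {
            var sw = new System.Text.StringBuilder();
            sw.AppendLine("BEGIN:VCALENDAR");
            sw.AppendLine("VERSION:2.0");
            sw.AppendLine("METHOD:CANCEL");                       // was REQUEST
            sw.AppendLine("BEGIN:VEVENT");
            sw.AppendLine(attendees);
            sw.AppendLine("STATUS:CANCELLED");                    // mark the event cancelled
            sw.AppendLine(string.Format("DTSTART:{0:yyyyMMddTHHmmssZ}", start));
            sw.AppendLine(string.Format("DTEND:{0:yyyyMMddTHHmmssZ}", end));
            sw.AppendLine(string.Format("DTSTAMP:{0:yyyyMMddTHHmmssZ}", DateTime.UtcNow));
            sw.AppendLine("ORGANIZER;CN=\"NAME\":mailto:" + organizer);
            sw.AppendLine("SEQUENCE:" + sequence);                // must exceed the request's
            sw.AppendLine("UID:" + UID);                          // must match the request's
            sw.AppendLine("SUMMARY;LANGUAGE=en-us:" + subject);
            sw.AppendLine("END:VEVENT");
            sw.AppendLine("END:VCALENDAR");
            return sw.ToString();
        }

    The AlternateView would then be created with new System.Net.Mime.ContentType("text/calendar; method=CANCEL") in place of method=REQUEST.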

    Read the article

  • How can I test caching and cache busting?

    - by Nathan Long
    In PHP, I'm trying to steal a page from the Rails playbook (see 'Using Asset Timestamps' here):

        By default, Rails appends assets' timestamps to all asset paths. This allows you to set a
        cache-expiration date for the asset far into the future, but still be able to instantly
        invalidate it by simply updating the file (and hence updating the timestamp, which then
        updates the URL, as the timestamp is part of it, which in turn busts the cache).

        It's the responsibility of the web server you use to set the far-future expiration date on
        cached assets that you need to take advantage of this feature. Here's an example for Apache:

            # Asset Expiration
            ExpiresActive On
            <FilesMatch "\.(ico|gif|jpe?g|png|js|css)$">
                ExpiresDefault "access plus 1 year"
            </FilesMatch>

    If you look at the source for a Rails page, you'll see what they mean: the path to a stylesheet might be "/stylesheets/scaffold.css?1268228124", where the numbers at the end are the timestamp when the file was last updated.

    So it should work like this:

    1. The browser says 'give me this page'.
    2. The server says 'here, and by the way, this stylesheet called scaffold.css?1268228124 can be cached for a year - it's not gonna change.'
    3. On reloads, the browser says 'I'm not asking for that css file, because my local copy is still good.'
    4. A month later, you edit and save the file, which changes the timestamp, which means that the file is no longer called scaffold.css?1268228124, because the numbers change.
    5. When the browser sees that, it says 'I've never seen that file! Give me a copy, please.' The cache is 'busted.'

    I think that's brilliant. So I wrote a function that spits out stylesheet and javascript tags with timestamps appended to the file names, and I configured Apache with the statement above.

    Now: how do I tell if the caching and cache busting are working? I'm checking my pages with two plugins for Firebug: YSlow and Google Page Speed. Both seem to say that my files are caching: "Add expires headers" in YSlow and "Leverage browser caching" in Page Speed are both checked. But when I look at the Page Speed Activity, I see a lot of requests and waiting and no 'cache hits'. If I change my stylesheet and reload, I do see the change immediately. But I don't know if that's because the browser never cached it in the first place or because the cache was busted. How can I tell?
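    The timestamp-tag helper itself is a few lines in any language; a hedged C# rendering of the idea (the poster's actual version is PHP, and the web-root mapping below is an assumption):

        using System.IO;

        // Sketch only: append the file's last-write time to the asset URL,
        // so saving the file changes the URL and busts the far-future cache.
        public static class AssetHelper
        {
            public static string TimestampedUrl(string webRoot, string virtualPath)
            {
                string physical = Path.Combine(webRoot, virtualPath.TrimStart('/'));
                long stamp = File.GetLastWriteTimeUtc(physical).Ticks;
                return virtualPath + "?" + stamp;
            }
        }

    As for verifying it: with a primed cache and a far-future Expires header, the browser should make no request at all for the timestamped URL on reload - no request (or a 200 served from cache in the Net panel) is the practical signal that caching is working.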

    Read the article

  • IDs necessary in update script not being stored (or even seen!?) (PHP MySQL)

    - by Derek
    Hi guys, I really need help with this one... I have spent 3 hours trying to figure it out...

    Basically, I have 3 tables necessary for this function to work (the query and PHP): Authors, Books and Users. An author can have many books, and a user can have many books - that's it. When the admin user selects to update a book, they are presented with a form displaying the current data within the fields - very straightforward. However, there is one tricky part: the admin user can change the author for a book (in case they made a mistake) and also change the user with which the book is associated.

    When I select to update the single book's information, I am not getting any values whatsoever for author_id or user_id, meaning that when the user updates the book info, the associations with the user and author are being scrapped altogether (when before there was an association). I cannot see why this is happening, because I can clearly see the IDs for the users and authors as my option values (they are in select dropdowns).

    Here is my SQL to retrieve the user ID:

        SELECT user_id, name FROM users

    and then I have my select options, which bring up all the users in the system:

        <label>This book belongs to:</label>
        <select name="name" id="name">
            <option value="<?php echo $row['user_id']?>" SELECTED><?php echo $row['name']?> - Current</option>
            <?php while($row = mysql_fetch_array($result)) { ?>
            <option value="<?php echo $row['user_id']; if (isset($_POST['user_id']));?>"><?php echo $row['name']?></option>
            <?php } ?>
        </select>

    In the rendered HTML form, I can select the users (by name), and within the source code I can see the IDs (as the values) matching against the names of the users. Finally, in my script that performs the update, I have this:

        $book_id   = $_POST['book_id'];
        $bookname  = $_POST['bookname'];
        $booklevel = $_POST['booklevel'];
        $author_id = $_POST['author_id'];
        $user_id   = $_POST['user_id'];

        $sql = "UPDATE books SET bookname= '".$bookname."', booklevel= '".$booklevel."',
                author_id='".$author_id."', user_id= '".$user_id."'
                WHERE book_id = ".$book_id;

    The result of this query returns no value for either author_id or user_id... Obviously in this question I have given the information for the user stuff (with the HTML being displayed), but I'm guessing that I have the same problem with authors as well. How can I get these IDs passed to the script so that the change can be acknowledged? :(

    Read the article

  • Written a resharper plugin that modifies the TextControl?

    - by Ajaxx
    ReSharper claims to eat its own dogfood; specifically, they claim that many of the features of ReSharper are written on top of R# (OpenAPI). I'm writing a simple plugin to fix up the comments of the current document or selection. When this plugin is run, it throws an exception as follows:

        Document can be modified inside a command scope only

    I've researched the error and can't find anything to help with this, so I'm hoping that possibly you've written a plugin to accomplish this. If not, I hope the snippet is enough to help others get their own plugins underway.

        using System;
        using System.IO;
        using System.Windows.Forms;
        using JetBrains.ActionManagement;
        using JetBrains.DocumentModel;
        using JetBrains.IDE;
        using JetBrains.TextControl;
        using JetBrains.Util;

        namespace TinkerToys.Actions
        {
            [ActionHandler("TinkerToys.RewriteComment")]
            public class RewriteCommentAction : IActionHandler
            {
                #region Implementation of IActionHandler

                /// <summary>
                /// Updates action visual presentation. If presentation.Enabled is set to false,
                /// Execute will not be called.
                /// </summary>
                /// <param name="context">DataContext</param>
                /// <param name="presentation">presentation to update</param>
                /// <param name="nextUpdate">delegate to call</param>
                public bool Update(IDataContext context, ActionPresentation presentation, DelegateUpdate nextUpdate)
                {
                    ITextControl textControl = context.GetData(DataConstants.TEXT_CONTROL);
                    return textControl != null;
                }

                /// <summary>
                /// Executes action. Called after Update, which set ActionPresentation.Enabled to true.
                /// </summary>
                /// <param name="context">DataContext</param>
                /// <param name="nextExecute">delegate to call</param>
                public void Execute(IDataContext context, DelegateExecute nextExecute)
                {
                    ITextControl textControl = context.GetData(DataConstants.TEXT_CONTROL);
                    if (textControl != null)
                    {
                        TextRange textSelectRange;

                        ISelectionModel textSelectionModel = textControl.SelectionModel;
                        if ((textSelectionModel != null) && textSelectionModel.HasSelection())
                        {
                            textSelectRange = textSelectionModel.Range;
                        }
                        else
                        {
                            textSelectRange = new TextRange(0, textControl.Document.GetTextLength());
                        }

                        IDocument textDocument = textControl.Document;
                        String textSelection = textDocument.GetText(textSelectRange);
                        if (textSelection != null)
                        {
                            StringReader sReader = new StringReader(textSelection);
                            StringWriter sWriter = new StringWriter();
                            Converter.Convert(sReader, sWriter);
                            textSelection = sWriter.ToString();
                            textDocument.ReplaceText(textSelectRange, textSelection);
                        }
                    }
                }

                #endregion
            }
        }

    So what is this command scope it wants so badly? I had some additional logging in this prior to posting, so I'm absolutely certain that both the range and the text are valid. In addition, the error seems to indicate that I'm missing some scope that I've been, as yet, unable to find.

    Read the article

  • Advantage of creating a generic repository vs. specific repository for each object?

    - by LuckyLindy
    We are developing an ASP.NET MVC application and are now building the repository/service classes. I'm wondering if there are any major advantages to creating a generic IRepository interface that all repositories implement, vs. each repository having its own unique interface and set of methods.

    For example, a generic IRepository interface might look like this (taken from this answer):

        public interface IRepository : IDisposable
        {
            T[] GetAll<T>();
            T[] GetAll<T>(Expression<Func<T, bool>> filter);
            T GetSingle<T>(Expression<Func<T, bool>> filter);
            T GetSingle<T>(Expression<Func<T, bool>> filter, List<Expression<Func<T, object>>> subSelectors);
            void Delete<T>(T entity);
            void Add<T>(T entity);
            int SaveChanges();
            DbTransaction BeginTransaction();
        }

    Each repository would implement this interface (e.g. CustomerRepository : IRepository, ProductRepository : IRepository, etc.). The alternative that we've followed in prior projects would be:

        public interface IInvoiceRepository : IDisposable
        {
            EntityCollection<InvoiceEntity> GetAllInvoices(int accountId);
            EntityCollection<InvoiceEntity> GetAllInvoices(DateTime theDate);
            InvoiceEntity GetSingleInvoice(int id, bool doFetchRelated);
            InvoiceEntity GetSingleInvoice(DateTime invoiceDate, int accountId); //unique
            InvoiceEntity CreateInvoice();
            InvoiceLineEntity CreateInvoiceLine();
            void SaveChanges(InvoiceEntity invoice); //handles inserts or updates
            void DeleteInvoice(InvoiceEntity invoice);
            void DeleteInvoiceLine(InvoiceLineEntity invoiceLine);
        }

    In the second case, the expressions (LINQ or otherwise) are entirely contained in the repository implementation; whoever is implementing the service just needs to know which repository function to call. I guess I don't see the advantage of writing all the expression syntax in the service class and passing it to the repository. Wouldn't this mean easy-to-mess-up LINQ code is being duplicated in many cases?

    For example, in our old invoicing system, we call InvoiceRepository.GetSingleInvoice(DateTime invoiceDate, int accountId) from a few different services (Customer, Invoice, Account, etc.). That seems much cleaner than writing the following in multiple places:

        rep.GetSingle(x => x.AccountId == someId && x.InvoiceDate == someDate.Date);

    The only disadvantage I see to the specific approach is that we could end up with many permutations of Get* functions, but this still seems preferable to pushing the expression logic up into the service classes. What am I missing?
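    One middle ground worth noting - a hedged sketch, not the poster's code - is a thin generic base that specific repositories extend, so the shared CRUD plumbing is written once while each entity's LINQ expressions stay inside its own repository:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public abstract class RepositoryBase<T> where T : class
        {
            protected abstract IQueryable<T> Query(); // ORM-specific (assumed)

            public T GetSingle(Expression<Func<T, bool>> filter) => Query().Single(filter);
            public T[] GetAll(Expression<Func<T, bool>> filter) => Query().Where(filter).ToArray();
        }

        // Entity stub for illustration only.
        public class InvoiceEntity { public int AccountId; public DateTime InvoiceDate; }

        public class InvoiceRepository : RepositoryBase<InvoiceEntity>
        {
            private readonly IQueryable<InvoiceEntity> source;
            public InvoiceRepository(IQueryable<InvoiceEntity> source) { this.source = source; }
            protected override IQueryable<InvoiceEntity> Query() => source;

            // The expression lives here, written once, shared by every service.
            public InvoiceEntity GetSingleInvoice(DateTime invoiceDate, int accountId) =>
                GetSingle(x => x.AccountId == accountId && x.InvoiceDate == invoiceDate.Date);
        }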

    Read the article

  • UpdatePanel works great in all browsers except IE8

    - by Phil
    I'm using a timer-triggered UpdatePanel which works perfectly under the latest versions of Firefox, Opera, and Safari. However, in IE it sometimes works fine (just the panel updates, without flicker elsewhere), but often the whole page starts to flicker after a while. I'm using a control to encapsulate this logic, and the control is used on all of the pages on my web site. Here's my code. Any suggestions? And thank you.

        <%@ Control Language="C#" AutoEventWireup="true" CodeBehind="QuoteControl.ascx.cs" Inherits="occinc.QuoteControl" %>
        <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="True" AjaxFrameworkMode="Enabled"/>
        <asp:UpdatePanel UpdateMode="Conditional" runat="server" ID="UpdatePanel1">
            <ContentTemplate>
                <div id="content-side2-three-column">
                    <%--Data Source Controls--%>
                    <asp:ObjectDataSource ID="ObjectDataSourceQuotes" runat="server" SelectMethod="GetAllQuotes"
                        TypeName="Quotes"></asp:ObjectDataSource>
                    <asp:GridView ID="GridViewQuotes" runat="server" AutoGenerateColumns="False"
                        DataSourceID="ObjectDataSourceQuotes" BorderWidth="0" BorderColor="white"
                        OnRowCreated="GridViewAllQuotes_RowCreated">
                        <Columns>
                            <asp:TemplateField>
                                <ItemTemplate>
                                    <h2 id="<%# Eval("id").ToString() %>">
                                        <%# Eval("Author") %>
                                    </h2>
                                    <q lang="en-us"><%# Eval("Content")%></q>
                                    <br />
                                    <hr />
                                </ItemTemplate>
                            </asp:TemplateField>
                        </Columns>
                    </asp:GridView>
                </div>
            </ContentTemplate>
            <Triggers>
                <asp:AsyncPostBackTrigger ControlID="Timer1" EventName="Tick" />
            </Triggers>
        </asp:UpdatePanel>
        <asp:Timer ID="Timer1" runat="server" Interval="5000" />
        <div class="clear"></div>

    Read the article

  • Using WSH (VBS) with iMacros - how do they do it?

    - by Carl
    (iMacros for Firefox 6.6.5.0; Firefox 3.6.3; Windows XP Pro SP3 with all updates)

    I made an iMacro to select "Load next 25" (comments) on a web page (CNN.com). Unfortunately, iMacros doesn't appear to do looping (do the above until that string doesn't appear on the page anymore - i.e., all the comments are loaded). I tried putting {!iloop} in the TAG command, and it didn't work - then I read that it wouldn't. So I tried the example at http://wiki.imacros.net/Loop_after_Query_or_Login

    I can't find any information on how to actually run the script in the above example. I searched Google and found that VBS scripting is handled with .wsh files on Windows XP Pro. (The examples and other references there say Windows does VBS natively, so I looked up how with Google.) So I made the following .wsh file (modified from the above example):

        Option Explicit
        Dim iim1, iret

        'initialize iMacros instance
        set iim1 = CreateObject("imacros")
        iret = iim1.iimInit()

        do while not iret < 0
            iret = iim1.iimPlay("Load All CNN Comments")
        loop

        ' tell user we're done
        msgbox "End."

        ' exit iMacros instance and quit script
        iret = iim1.iimExit()
        Wscript.Quit()

    Here's the iMacro (Load All CNN Comments.iim):

        VERSION BUILD=6650406 RECORDER=FX
        TAG POS=1 TYPE=A ATTR=TXT:Load<SP>next<SP>25
        WAIT SECONDS=#DOWNLOADCOMPLETE#

    The iMacro works by itself: I press Play (left iMacros panel) and the next 25 comments load on the CNN.com page in the current tab.

    I put the .wsh file in the ...\iMacros\Macros directory, with the iMacro "Load All CNN Comments.iim". When I run the .wsh file (by just double-clicking its icon - I created it with Notepad, and Windows gave it an icon for that file type; it's executable), I get the message from "Windows Script Host": "There is no script file specified." I wasn't actually expecting it to work, as I don't see how Windows would know to call iMacros to run the .iim macro.

    It would be nice if there were a simple, COMPLETE example of how to use a VBS script with iMacros that isn't bogged down with unnecessary complications like filling in a form, loading multiple pages, etc. I can't find ANY example. So what do I need to do to get this to work?

    I just installed iMacros yesterday, because I am constantly having the problem that there are hundreds of comments after a CNN.com article, and loading 25 more at a time until they are all on the page makes it impractical to read any replies to my comments. It would also be nice if I could run the macro from Firefox, rather than by double-clicking on some file somewhere. Thanks for any help.

    Read the article

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like these aren't available by default on most phones:

    Full-text search: searches all address book contents, messages, and anything else as a bonus.
    Better call management: e.g., a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something, etc.).
    Bluetooth remote control (like, e.g., anyRemote, but available by default on a Bluetooth phone).
    Multitasking capabilities worth mentioning.
    And in general, weekly (or any) software updates making the phone much more usable (even if they had to be done over USB rather than over the network).

    I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year-old phone has a CPU an order of magnitude faster than my first PC and about as much storage space, and it's ridiculous how bad (slow, unwieldy) the software is - and it's not one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled up?

    Edit: I believe a clarification on the multitasking point might be beneficial. I'll use my phone as an example, although the point is much more general. The phone can multitask and in fact does: you can listen to music and do something else at the same time. On the other hand, the way the software has been designed makes multitasking next to useless. (Ditto with the external touch screen: it can take touch commands, but only one application makes use of it, and only with 3 commands.) To take the multitasking example to the extreme: if I plug my phone into my laptop and it registers as an external disk, it doesn't allow any kind of operation - messages, calling, calendar, everything is out of reach - although I can receive a call. No "battery life" issue there: it's charging while connected.

    BTW, another example of design below the current state of the art: I don't see a phone on the horizon which will remember where in an audio or video file you were when you stopped listening/watching it last time (podcasts are a good use case). Simplistic rewind/fast-forward functionality only aggravates the problem.

    Read the article

  • stop and split generated sequence at repeats - clojure

    - by fitzsnaggle
    I am trying to make a sequence that will only generate values until it finds the following conditions, and return the listed results:

    case head =
    0 - return {:origin [all generated except 0] :pattern [0]}
    1 - return {:origin nil :pattern [all-generated-values]}
    repeated value - return {:origin [values-before-repeat] :pattern [values-after-repeat]}

        ; n = int
        ; x = int
        ; hist - all generated values

        ; Keeps the head below x
        (defn trim-head [head x]
          (loop [head head]
            (if (> head x)
              (recur (- head x))
              head)))

        ; Generates the next head
        (defn next-head [head x n]
          (trim-head (* head n) x))

        ; Generates a whole row - rows are a max of x - 1.
        (defn row [x n]
          (iterate #(next-head % x n) n))

        (take (- x 1) (row 11 3))

    Examples of cases to stop before reaching the end of a row:

    [9 8 4 5 6 7 4] - '4' is repeated, so STOP. Return the preceding values as origin and the rest as pattern: {:origin [9 8] :pattern [4 5 6 7]}
    [4 5 6 1] - found a '1', so STOP, and return everything as pattern: {:origin nil :pattern [4 5 6 1]}
    [3 0] - found a '0', so STOP: {:origin [3] :pattern [0]}

    Otherwise, if the sequence reaches a length of x - 1: {:origin [all values generated] :pattern nil}

    The problem: I have used partition-by with some success to split the groups at the point where a repeated value is found, but would like to do this lazily. Is there some way I can use take-while, or condp, or the :while clause of the for loop to make a condition that partitions when it finds repeats?

    Some attempts:

        (take 2 (partition-by #(= 1 %) (row 11 4)))

        (for [p (partition-by #(stop-match? %) head)
              (iterate #(next-head % x n) n)
              :while (or (not= (last p) (or 1 0 n))
                         (nil? (rest p)))]
          {:origin (first p) :pattern (concat (second p) (last p))})

    Updates: what I really want to be able to do is find out if a value has repeated and partition the seq without using the index. Is that possible? Something like this:

        (defn row [x n]
          (loop [hist [n]
                 head (gen-next-head (first hist) x n)
                 steps 1]
            (if (>= (- x 1) steps)
              (case head
                0 {:origin [hist] :pattern [0]}
                1 {:origin nil :pattern (conj hist head)}
                ; Speculative from here on out
                (let [p (partition-by #(apply distinct? %) (conj hist head))]
                  (if-not (nil? (next p)) ; One partition if no repeats.
                    {:origin (first p) :pattern (concat (second p) (nth 3 p))}
                    (recur (conj hist head)
                           (gen-next-head head x n)
                           (inc steps)))))
              {:origin hist :pattern nil})))

    Read the article

  • Various GPS Android Functionality Questions..

    - by Tyler
    Hello - I have a few questions (so far) about the LocationManager on Android and GPS in general. Feel free to answer any number of the questions below, and I appreciate your help in advance! (I noticed this stuff doesn't appear to be documented very well, so hopefully these questions will help others out too!)

    1) I am using the following code, but I think there may be extra fluff in here that I do not need. Can you tell me if I can delete any of this?

        LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
        LocationListener locationListener = new MyLocationListener();
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, locationListener);
        LocationProvider locationProvider = lm.getProvider("gps");
        Location currentLocation = lm.getLastKnownLocation(locationProvider.getName());

    2) Is there a way to hold off on the last step (accessing getLastKnownLocation) until after I am sure I have a GPS lock? What happens if this is called and GPS is still looking for a signal?

    3) MOST importantly, I want to ensure I have a GPS lock before I proceed to my next method, so is there a way to check whether GPS is locked on and getLastKnownLocation is up to date?

    4) Is there a way to 'shut down' the GPS listener once it does receive a lock and getLastKnownLocation is updated? I don't see a need to keep this running for my application once I have obtained a lock.

    5) Can you please confirm my assumption that getLastKnownLocation is updated frequently as the receiver moves?

    6) In my code, I also have a class called MyLocationListener (code below) that I honestly just took from another example. Is this actually needed? I assume this updates my location manager whenever the location changes, but it sure doesn't appear that there is much to the class itself!

        private class MyLocationListener implements LocationListener {
            @Override
            public void onLocationChanged(Location loc) {
                if (loc != null) {
                    //Toast.makeText(getBaseContext(), "Location changed : Lat: " + loc.getLatitude()
                    //        + " Lng: " + loc.getLongitude(), Toast.LENGTH_SHORT).show();
                }
            }

            @Override
            public void onProviderDisabled(String provider) {
                // TODO Auto-generated method stub
            }

            @Override
            public void onProviderEnabled(String provider) {
                // TODO Auto-generated method stub
            }

            @Override
            public void onStatusChanged(String provider, int status, Bundle extras) {
                // TODO Auto-generated method stub
            }
        }

    Read the article

  • Matlab won't extract first row & column because of matrix dimensions!

    - by ZaZu
    Hey guys, I am tracking an object that is thrown in the air, and this object follows a parabolic path. I'm tracking the object through a series of 30 images. I managed to exclude all the background and keep the object apparent, then used its centroid to get its coordinates and plot them. Now I'm supposed to predict where the object is going to fall, so I used polyfit and polyval. The problem is, Matlab says:

        ??? Index exceeds matrix dimensions.

    The centroid creates its own structure with one row and 2 columns. Every time the object moves in the loop, it updates the first row only. Here is part of the code:

        for N = 1:30
            % ...
            x = centroid(1,1); % extract first row, first column for x
            y = centroid(1,2); % extract first row, second column for y
            plot_xy = plot(x,y);
            set(plot_xy,'XData',x(1:N),'YData',y(1:N));
            fitting = polyfit(x(1:N),y(1:N),2);
            parabola = plot(x,nan(23,1));
            evaluate = polyval(fitting,x);
            set(parabola,'YData',evaluate)
            % ...
        end

    The error message I get is "??? Index exceeds matrix dimensions." It seems that (1:N) is causing the problems; I honestly do not know why. But when I remove N, the object is plotted along with its points, although polyfit then won't work - it gives me an error saying:

        Warning: Polynomial is not unique; degree >= number of data points.
        > In polyfit at 72

    If I make it (1:N-1) or something, it plots more points before it starts giving me the same error (not unique...). Any ideas why? Thanks a lot!!

    Read the article

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MS SQL.

    Our Measurement entity class is the one that holds a single measurement, and looks like this:

        public class Measurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; private set; }
            public virtual Timestamp Timestamp { get; private set; }
            public virtual IList<VectorValue> Vectors { get; private set; }
        }

    Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.NET batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later).

    The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected):

        1 timestamp
        1 measurement
        8 vector values

    which means that we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times.

    Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid unnecessary performance costs related to the way the database is organized right now.

    [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.

    Read the article

  • Event sourcing: Write event before or after updating the model

    - by Magnus
    I'm reasoning about event sourcing and I often arrive at a chicken-and-egg problem. I would be grateful for some hints on how to reason around this: if I execute all I/O-bound processing asynchronously (i.e., writing to the event log), then how do I handle, or sometimes even detect, failures?

    I'm using Akka actors, so processing is sequential for each event/message. I do not have any database at this time; instead I persist all the events in an event log and then keep an aggregated state of all the events in a model stored in memory. Queries are all against this model - you can consider it to be a cache.

    Example - creating a new user:

    1. Validate that the user does not exist in the model.
    2. Persist the event to the journal.
    3. Update the model (in memory).

    If step 3 breaks, I still have the persisted event, so I can replay it at a later date. If step 2 breaks, I can handle that gracefully as well. This is fine - but since step 2 is I/O-bound, I figured that I should do the I/O in a separate actor to free up the first actor for queries.

    Updating a user while allowing queries (A0 = front-end/GUI actor, A1 = processor actor, A2 = I/O actor, E = event bus):

    1. (A0-E-A1) Event is published to update user 'U1'. Validate that the user 'U1' exists in the model.
    2. (A1-A2) Persist the event to the journal (separate actor).
    3. (A0-E-A1-A0) Query for user 'U1' profile.
    4. (A2-A1) Event is now persisted; continue to update the model.
    5. (A0-E-A1-A0) Query for user 'U1' profile (now returns fresh data).

    This is appealing, since queries can be processed while I/O is churning along at its own pace. But now I can cause myself all kinds of problems where two incompatible commands (delete and then update) could both be persisted to the event log and crash on me when replayed at a later date, since I do the validation before persisting the event and only then update the model.

    My aim is to keep the reasoning around my model simple (since an actor processes messages sequentially on a single thread) but not be stuck waiting for I/O-bound updates when querying. I get the feeling I'm modeling a database, which in itself might be the problem. If things are unclear, please write a comment.
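    For the simple (non-pipelined) variant, the validate-persist-apply sequence in a single sequential handler is what rules out incompatible event pairs; here is a hedged C# sketch, with the journal and model types as illustrative stand-ins:

        using System;
        using System.Collections.Generic;

        // All types below are illustrative stand-ins, not a real framework.
        public interface IEventJournal { void Append(object evt); }
        public sealed record CreateUser(Guid UserId);
        public sealed record UserCreated(Guid UserId);

        public sealed class UserModel
        {
            private readonly HashSet<Guid> users = new();
            public bool Exists(Guid id) => users.Contains(id);
            public void Apply(UserCreated evt) => users.Add(evt.UserId);
        }

        public sealed class UserCommandHandler
        {
            private readonly IEventJournal journal;
            private readonly UserModel model;

            public UserCommandHandler(IEventJournal journal, UserModel model)
            {
                this.journal = journal;
                this.model = model;
            }

            // Processed one message at a time (actor-style), so the
            // validate -> persist -> apply sequence is atomic with respect to
            // every other command; a delete-then-update pair can never both
            // pass validation against a stale model.
            public void Handle(CreateUser cmd)
            {
                if (model.Exists(cmd.UserId)) return; // 1. validate against the cache
                var evt = new UserCreated(cmd.UserId);
                journal.Append(evt);                  // 2. persist (blocks this actor only)
                model.Apply(evt);                     // 3. update the in-memory model
            }
        }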

    Read the article

  • how to implement a "soft barrier" in multithreaded c++

    - by Jason
    I have some multithreaded C++ code with the following structure:

        do_thread_specific_work();
        update_shared_variables();
        //checkpoint A
        do_thread_specific_work_not_modifying_shared_variables();
        //checkpoint B
        do_thread_specific_work_requiring_all_threads_have_updated_shared_variables();

    What follows checkpoint B is work that could have started if all threads had merely reached checkpoint A - hence my notion of a "soft barrier". Typically, multithreading libraries only provide "hard barriers", in which all threads must reach some point before any can continue. Obviously a hard barrier could be used at checkpoint B.

    Using a soft barrier can lead to better execution time, especially since the work between checkpoints A and B may not be load-balanced between the threads (i.e., one slow thread that has reached checkpoint A but not B could be causing all the others to wait at a barrier just before checkpoint B).

    I've tried using atomics to synchronize things, and I know with 100% certainty that it is NOT guaranteed to work. For example, using OpenMP syntax, before the parallel section start with:

        shared_thread_counter = num_threads;  //known at compile time
        #pragma omp flush

    Then at checkpoint A:

        #pragma omp atomic
        shared_thread_counter--;

    Then at checkpoint B (using polling):

        #pragma omp flush
        while (shared_thread_counter > 0)
        {
            usleep(1);  //can be removed, but better to limit memory bandwidth
            #pragma omp flush
        }

    I've designed some experiments in which I use an atomic to indicate that some operation before it is finished. The experiments would work with 2 threads most of the time, but consistently fail when I have lots of threads (like 20 or 30). I suspect this is because of the caching structure of modern CPUs. Even if one thread updates some other value before doing the atomic decrement, it is not guaranteed to be read by another thread in that order. Consider the case when the other value is a cache miss and the atomic decrement is a cache hit.

    So back to my question: how does one CORRECTLY implement this "soft barrier"? Is there any built-in feature that guarantees such functionality? I'd prefer OpenMP, but I'm familiar with most of the other common multithreading libraries.

    As a workaround right now, I'm using a hard barrier at checkpoint B, and I've restructured my code to make the work between checkpoints A and B automatically load-balance between the threads (which has been rather difficult at times). Thanks for any advice/insight :)
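    As an aside, the counter idea itself is sound once the decrement and the polling read carry release/acquire semantics; a hedged sketch of the same soft barrier in C# (where Interlocked and Volatile supply exactly that ordering), purely to illustrate the shape:

        using System.Threading;

        // Sketch only: Arrive() is checkpoint A (publish "my shared updates are
        // done"), WaitAll() is checkpoint B (wait until everyone has arrived).
        class SoftBarrier
        {
            private int remaining;

            public SoftBarrier(int threads) { remaining = threads; }

            // Interlocked.Decrement is a full fence, so all shared writes made
            // before it become visible to any thread that sees the new count.
            public void Arrive() => Interlocked.Decrement(ref remaining);

            public void WaitAll()
            {
                var spin = new SpinWait();
                while (Volatile.Read(ref remaining) > 0) // acquire-read of the count
                    spin.SpinOnce();                     // backs off progressively
            }
        }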

    Read the article

  • Good object/DB set-up for CMS-esque app for managing content and user permissions?

    - by sah302
    Hi all, I am writing a big CMS-esque app to allow users to manage web content through web applications. I've got a pretty good db-driven user permission system going, but am having trouble coming up with a good way to handle content groups and pages. I've got a couple of options and am not sure which one to take. Furthermore, I am not sure how to handle static page updates that have no 'widgets' in them.

    My current set-up for permissions is this. Objects: User, UserGroup, UserUserGroup, UserGroupType. A standard many-to-many relationship: User -> UserUserGroup <- UserGroup. Each UserGroup has a UserGroupType, which could be anything from Title or Department to PermissionGroup. PermissionGroup manages the permissions.

    Right now I check permissions on a per-page basis, based on the user's PermissionGroups. So for a page which has CMS features for a news widget, I check for the permission groups "Site Admin" and "News Admin".

    Now the issue I am coming to: the site has many different departments involved. No problem, I think - I can just have an EntityContentGroup so any widget app can be used for any department. So for my HR department, each of their news items would be in the EntityContentGroup with that news item's ID and a content group of "HR" or "HR News". But maybe this isn't the most efficient way to go about it? I don't want to make the content group simply a NewsItemType, because some news items could apply to multiple areas, so I want to be able to assign them to as many areas as I want. Likewise, all of my widget apps have this, which is why I chose EntityContentGroup and not just NewsItemContentGroup.

    I was also thinking: instead of doing a content group, do a Page object that says which page some entity should be on. It seems almost like the same thing, but would I want to use Page for something else? I was thinking Page would be used for static pages with no widgets - a simple rich text editor can edit the content of that page and I save that item to a page??

    And then, instead of doing a page-level check for UserGroup permissions, would it be better to associate a UserGroup to a ContentGroup and then, depending on which ContentGroup's content is displayed on the page, determine the permissions through that relationship? Is that better? I am not sure at this point. I guess I am just getting a tad overwhelmed, as this is the largest app in scope and size that I have ever written. What is the best approach for this, based on my current user permission set-up?

    Read the article
