Search Results

Search found 59205 results on 2369 pages for 'time capsule'.

  • jQuery Tools alert works once (but only once)

    - by Jim Miller
    I'm trying to build a simple alert mechanism with jQuery Tools -- in response to a bit of Javascript code, pop up an overlay with a message and an OK button that, when clicked, makes the overlay go away. Trivial, or it should be. I've been slavishly following http://flowplayer.org/tools/demos/overlay/trigger.html, and have something that works fine the first time it's invoked, but only that time. If I repeat the JS action that should expose the overlay, it doesn't. My content/DIV: <div class='modal' id='the_alert'> <div id='modal_content' class='modal_content'> <h2>hi there</h2> this is the body <p> <button class='close'>OK</button> </p> </div> <div id='modal_background' class='modal_background'><img src='/images/overlay/f9f9f9-180.png' class='stretch' alt='' /></div> </div> and the Javascript: function showOverlayDialog() { $('#the_alert').overlay({ mask: {color: '#cccccc', loadSpeed: 200, opacity: 0.9}, closeOnClick: false, load: true }); } As I said: When showOverlayDialog() is invoked the first time, the overlay appears just like it should, and goes away when the "OK" button is clicked. But if I cause showOverlayDialog() to run again, without reloading the page, nothing happens. If I reload the page, then the pattern repeats -- the first invocation brings up the overlay, but the second one doesn't. I'm obviously missing something -- any advice out there? Thanks!
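
    A minimal sketch of one likely fix, hedged: passing the config object again re-runs setup rather than re-opening, so construct the overlay once with api: true and re-use the returned API on later calls (variable names are illustrative):

      var overlayApi = null;  // cached jQuery Tools overlay API

      function showOverlayDialog() {
        if (overlayApi === null) {
          // first call: build the overlay and keep the API object around
          overlayApi = $('#the_alert').overlay({
            mask: {color: '#cccccc', loadSpeed: 200, opacity: 0.9},
            closeOnClick: false,
            api: true,   // return the overlay API instead of the jQuery object
            load: true   // open immediately on construction
          });
        } else {
          overlayApi.load();  // later calls: just re-open the existing overlay
        }
      }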

    Read the article

  • Python Socket Getting Connection Reset

    - by Ian
    I created a threaded socket listener that stores newly accepted connections in a queue. The socket threads then read from the queue and respond. For some reason, when doing benchmarking with 'ab' (apache benchmark) using a concurrency of 2 or more, I always get a connection reset before it's able to complete the benchmark (this is taking place locally, so there's no external connection issue). class server: _ip = '' _port = 8888 def __init__(self, ip=None, port=None): if ip is not None: self._ip = ip if port is not None: self._port = port self.server_listener(self._ip, self._port) def now(self): return time.ctime(time.time()) def http_responder(self, conn, addr): httpobj = http_builder() httpobj.header('HTTP/1.1 200 OK') httpobj.header('Content-Type: text/html; charset=UTF-8') httpobj.header('Connection: close') httpobj.body("Everything looks good") data = httpobj.generate() sent = conn.sendall(data) def http_thread(self, id): self.log("THREAD %d: Starting Up..." % id) while True: conn, addr = self.q.get() ip, port = addr self.log("THREAD %d: responding to request: %s:%s - %s" % (id, ip, port, self.now())) self.http_responder(conn, addr) self.q.task_done() conn.close() def server_listener(self, host, port): self.q = Queue.Queue(0) sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.bind( (host, port) ) sock.listen(5) for i in xrange(4): #thread count thread.start_new(self.http_thread, (i+1, )) while True: self.q.put(sock.accept()) sock.close() server('', 9999) When running the benchmark, I get totally random numbers of good requests before it errors out, usually between 4 and 500. Edit: Took me a while to figure it out, but the problem was in sock.listen(5). Because I was using apache benchmark with a higher concurrency (5 and up) it was causing the backlog of connections to pile up, at which point the connections started getting dropped by the socket.
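
    Based on the poster's own diagnosis, a minimal sketch of the fix is simply a larger accept backlog (names match the code above):

      # inside server_listener(): a larger backlog queues bursts of connects
      # instead of letting the kernel reset them once 5 are pending
      sock.listen(128)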

    Read the article

  • Scala Interpreter scala.tools.nsc.interpreter.IMain Memory leak

    - by Peter
    I need to write a program using the Scala interpreter to run Scala code on the fly. The interpreter must be able to run an infinite amount of code without being restarted. I know that each time the method interpret() of the class scala.tools.nsc.interpreter.IMain is called, the request is stored, so the memory usage will keep going up forever. Here is the idea of what I would like to do: var interpreter = new IMain while (true) { interpreter.interpret(some code to be run on the fly) } If the method interpret() stores the request each time, is there a way to clear the buffer of stored requests? What I am trying to do now is to count the number of times the method interpret() is called, then get a new instance of IMain when the number of times reaches 100, for instance. Here is my code: var interpreter = new IMain var counter = 0 while (true) { interpreter.interpret(some code to be run on the fly) counter = counter + 1 if (counter > 100) { interpreter = new IMain counter = 0 } } However, I still see that the memory usage is going up forever. It seems that the IMain instances are not garbage-collected by the JVM. Could somebody help me solve this issue? I really need to be able to keep my program running for a long time without restarting, but I cannot afford such memory usage just for the Scala interpreter. Thanks in advance, Peter
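
    A hedged sketch of one thing worth trying: recent IMain versions expose a reset() method that discards the stored requests in place, avoiding both the unbounded growth and the churn of new IMain instances (whether it frees everything depends on the Scala version in use; codeToRun is a placeholder):

      import scala.tools.nsc.interpreter.IMain

      val interpreter = new IMain
      val codeToRun = "1 + 1"        // placeholder for the generated snippet
      var counter = 0
      while (true) {
        interpreter.interpret(codeToRun)
        counter += 1
        if (counter > 100) {
          interpreter.reset()        // drop accumulated request state in place
          counter = 0
        }
      }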

    Read the article

  • Maintain List of Active Users for Web

    - by Bryan Marble
    Problem Statement - Would like to know if particular web app user is active (i.e. logged in and using site) and be able to query for list of active users or determine a user's activity status. Constraints - Doesn't need to be exact (i.e. if a user was active within a certain timeframe, that's ok to say that they're active even if they've closed their browser). I feel like there should be a design pattern for this type of problem but haven't been able to find anything here or elsewhere on the web. Approaches I'm considering: Maintain a table that is updated any time a user performs an action (or some subset of actions). Would then query for users that have performed an action within some threshold of time. Try to monitor session information and maintain a table that lists logged in users and times out after a certain period of time. Some other more standard way of doing this? How would you approach this problem (again, from a design pattern perspective)? Thanks!
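
    A minimal sketch of the first approach (MySQL syntax; table and column names are illustrative): touch a last_seen timestamp on the actions you care about, then define "active" as seen within a threshold:

      -- on each qualifying request:
      UPDATE users SET last_seen = NOW() WHERE id = @current_user_id;

      -- anyone seen inside the window counts as active:
      SELECT id, username
      FROM users
      WHERE last_seen >= NOW() - INTERVAL 15 MINUTE;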

    Read the article

  • silverlight 3: long running wcf call triggers 401.1 (access denied)

    - by sympatric greg
    I have a WCF service consumed by a Silverlight 3 control. The Silverlight client uses a basicHttpBinding that is constructed at runtime from the control's initialization parameters like this: public static T GetServiceClient<T>(string serviceURL) { BasicHttpBinding binding = new BasicHttpBinding(Application.Current.Host.Source.Scheme.Equals("https", StringComparison.InvariantCultureIgnoreCase) ? BasicHttpSecurityMode.Transport : BasicHttpSecurityMode.None); binding.MaxReceivedMessageSize = int.MaxValue; binding.MaxBufferSize = int.MaxValue; binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly; return (T)Activator.CreateInstance(typeof(T), new object[] { binding, new EndpointAddress(serviceURL)}); } The service implements Windows security. Calls were returning as expected until the result set increased to several thousand rows, at which time HTTP 401.1 errors were received. The service's HttpBinding defines closeTimeout, openTimeout, receiveTimeout and sendTimeout of 10 minutes. If I limit the size of the resultset, the call succeeds. Additional observations from Fiddler: When Method2 is modified to return a smaller resultset (and avoid the problem), control initialization consists of 4 calls: Service1/Method1 -- result:401 Service1/Method1 -- result:401 (this time the header includes the element "Authorization: Negotiate TlRMTV...") Service1/Method1 -- result:200 Service1/Method2 -- result:200 (1.25 seconds) When Method2 is configured to return the larger resultset we get: Service1/Method1 -- result:401 Service1/Method1 -- result:401 (this time the header includes the element "Authorization: Negotiate TlRMTV...") Service1/Method1 -- result:200 Service1/Method2 -- result:401.1 (7.5 seconds) Service1/Method2 -- result:401.1 (15ms) Service1/Method2 -- result:401.1 (7.5 seconds)

    Read the article

  • Intent.putExtras not consistent

    - by martinjd
    I have a weird situation with AlarmManager. I am scheduling an event with AlarmManager and passing in a string using intent.putExtra. The string is either silent or vibrate, and when the receiver fires, the phone should either turn off the ringer or set the phone to vibrate. The log statement correctly outputs the expected value each time. Intent intent; if (eventType.equals("start")) { intent = new Intent(context, SReceiver.class); } else { intent = new Intent(context, EReceiver.class); } intent.setAction(eventType+Long.toString(newId)); Log.v("EditQT",ringerModeType.toUpperCase()); intent.putExtra("ringerModeType", ringerModeType.toUpperCase()); PendingIntent appIntent = PendingIntent.getBroadcast(context, 0, intent, 0); AlarmManager alarmManager = (AlarmManager) getSystemService (Context.ALARM_SERVICE); alarmManager.set(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), appIntent); The receiver that fires when the alarm executes also has a log statement, and I can see the first time around that the statement outputs the expected string, either SILENT or VIBRATE. The alarm executes, and then I change the value for putExtra to the opposite string, and the receiver still displays the previous value even though the call from the code above shows that the new value was passed in. The value for setAction is the same each time. audioManager = (AudioManager) context.getSystemService(Activity.AUDIO_SERVICE); Log.v("Start",intent.getExtras().get("ringerModeType").toString()); if (intent.getExtras().get("ringerModeType").equals("SILENTMODE")) { audioManager.setRingerMode(AudioManager.RINGER_MODE_SILENT); } else { audioManager.setRingerMode(AudioManager.RINGER_MODE_VIBRATE); } Any thoughts?
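
    A hedged sketch of the usual cause: PendingIntent.getBroadcast() returns the already-registered PendingIntent, old extras included, unless a flag tells it to refresh them:

      // FLAG_UPDATE_CURRENT keeps the existing PendingIntent but replaces its
      // extras with the ones in this intent, so the receiver sees the new value
      PendingIntent appIntent = PendingIntent.getBroadcast(context, 0, intent,
              PendingIntent.FLAG_UPDATE_CURRENT);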

    Read the article

  • Multithreading and Interrupts

    - by Nicholas Flynt
    I'm doing some work on the input buffers for my kernel, and I had some questions. On dual-core machines, I know that more than one "process" can be running simultaneously. What I don't know is how the OS and the individual programs work to protect against collisions in data. There are two things I'd like to know on this topic: (1) Where do interrupts occur? Are they guaranteed to occur on one core and not the other, and could this be used to make sure that real-time operations on one core were not interrupted by, say, file IO, which could be handled on the other core? (I'd logically assume that the interrupts would happen on the 1st core, but is that always true, and how would you tell? Or perhaps does each core have its own settings for interrupts? Wouldn't that lead to a scenario where each core could react simultaneously to the same interrupt, possibly in different ways?) (2) How does the dual-core processor handle opcode memory collision? If one core is reading an address in memory at exactly the same time that another core is writing to that same address in memory, what happens? Is an exception thrown, or is a value read? (I'd assume the write would work either way.) If a value is read, is it guaranteed to be either the old or new value at the time of the collision? I understand that programs should ideally be written to avoid these kinds of complications, but the OS certainly can't expect that, and will need to be able to handle such events without choking on itself.

    Read the article

  • Avoiding GC thrashing with WSE 3.0 MTOM service

    - by Leon Breedt
    For historical reasons, I have some WSE 3.0 web services that I cannot upgrade to WCF on the server side yet (it is also a substantial amount of work to do so). These web services are being used for file transfers from client to server, using MTOM encoding. This can also not be changed in the short term, for reasons of compatibility. Secondly, they are being called from both Java and .NET, and therefore need to be cross-platform, hence MTOM. How it works is that an "upload" WebMethod is called by the client, sending up a chunk of data at a time, since files being transferred could potentially be gigabytes in size. However, due to not being able to control parts of the stack before the WebMethod is invoked, I cannot control the memory usage patterns of the web service. The problem I am running into is for file sizes from 50MB or so onwards, performance is absolutely killed because of GC, since it appears that WSE 3.0 buffers each chunk received from the client in a new byte[] array, and by the time we've done 50MB we're spending 20-30% of time doing GC. I've played with various chunk sizes, from 16k to 2MB, with no real great difference in results. Smaller chunks are killed by the latency involved with round-tripping, and larger chunks just postpone the slowdown until GC kicks in. Any bright ideas on cutting down on the garbage created by WSE? Can I plug into the pipeline somehow and jury-rig something that has access to the client's request stream and streams it to the WebMethod? I'm aware that it is possible to "stream" responses to the client using WSE (albeit very ugly), but this problem is with requests from the client.

    Read the article

  • HOWTO - Compare a date string to datetime in SQL Server?

    - by Guy
    In SQL Server I have a DATETIME column which includes a time element. Example: '14 AUG 2008 14:23:019' What is the best method to only select the records for a particular day, ignoring the time part? Example: (Not safe, as it does not match the time part and returns no rows) DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_datetime = @p_date Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question as DATETIME stuff in MSSQL is probably the topic I look up most in SQLBOL. Update Clarified example to be more specific. Edit Sorry, but I've had to down-mod WRONG answers (answers that return wrong results). @Jorrit: WHERE (date>'20080813' AND date<'20080815') will return the 13th and the 14th. @wearejimbo: Close, but no cigar! badge awarded to you. You missed out records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. Less than 1 second before midnight.)
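
    A sketch of the pattern answers to this question usually converge on: a half-open range over the day, which matches every time component (including the just-before-midnight edge) and can still use an index:

      DECLARE @p_date DATETIME
      SET @p_date = CONVERT(DATETIME, '14 AUG 2008', 106)

      SELECT *
      FROM table1
      WHERE column_datetime >= @p_date
        AND column_datetime <  DATEADD(dd, 1, @p_date)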

    Read the article

  • Datatable add new column and values speed issue

    - by Cine
    I am having some speed issues with my datatables. In this particular case I am using it as a holder of data; it is never used in the GUI or any other scenario that actually uses any of the fancy features. In my speed trace, this particular constructor was showing up as a heavy user of time when my database is ~40k rows. The main user was set_Item of DataTable. protected myclass(DataTable dataTable, DataColumn idColumn) { this.dataTable = dataTable; IdColumn = idColumn ?? this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); JobIdColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); IsNewColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); int id = 1; foreach (DataRow r in this.dataTable.Rows) { r[JobIdColumn] = id++; r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0; } Digging deeper into the trace, it turns out that set_Item calls EndEdit, which brings my thoughts to the transaction support of the DataTable, which I have no usage for in my scenario. So my solution to this was to open editing on all of the rows and never close them again. _dt.BeginLoadData(); foreach (DataRow row in _dt.Rows) row.BeginEdit(); Is there a better solution? This feels too much like a big giant hack that will eventually come and bite me. You might suggest that I don't use DataTable at all, but I have already considered that and rejected it due to the amount of effort that would be required to reimplement with a custom class. The main reason it is a datatable is that it is ancient code (.NET 1.1 time) and I don't want to spend that much time changing it, and it is also because the original table comes out of a third-party component.
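
    A hedged alternative sketch: pairing BeginLoadData()/EndLoadData() with per-row BeginEdit()/EndEdit() keeps the suspension scoped to the loop, instead of leaving every row permanently in an open edit:

      this.dataTable.BeginLoadData();          // suspend index/constraint upkeep
      int id = 1;
      foreach (DataRow r in this.dataTable.Rows)
      {
          r.BeginEdit();
          r[JobIdColumn] = id++;
          r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0;
          r.EndEdit();                         // one commit per row, not per column set
      }
      this.dataTable.EndLoadData();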

    Read the article

  • What is your workflow when designing HTML/CSS layouts?

    - by DMin
    I have been working with PHP/MySQL as a hobby for close to a couple of years now, I have been working with Photoshop for a very long time, and I know CSS & HTML well enough to write without any reference, so I would not consider myself someone who's very new at this. I have recently started developing websites professionally (only person working on the project). I have seen the power of Joomla and how you can make a website ready for your customer in a matter of hours (if not minutes). I find it very hard to make layouts that remotely look like the themes on Joomla. I find making even simple layouts a very cumbersome process that takes a lot of time to get a good enough output. I have a feeling I may not be using the right tools or workflow for the job. What I wanted to find out was, as part of the industry: How do you make your website when you do it from scratch? What are the tools that you use? What is your workflow? Just noted a few things I know already, for your reference (you can skip this if you like): I have seen the export for web for Photoshop that exports CSS - but (as far as I know) it exports only absolutely positioned webpages, so they need to be beaten and fixed if you want to use them, for example, for Joomla. I have used the SiteGrinder plugin for Photoshop that exports HTML/CSS. It looks promising but I haven't tried it extensively. One of the tools that saves loads of time is Firebug. This makes it easy to edit HTML and CSS on the fly and get the page looking exactly as you want it. Recently stumbled upon Fireworks. But haven't explored it much. Thanks! :)

    Read the article

  • How can I use ToUnicode without breaking dead key support?

    - by Cypherjb
    A similar question has already been asked, so I'm not going to waste time re-explaining it, an existing discussion can be found here: http://stackoverflow.com/questions/1964614/toascii-tounicode-in-a-keyboard-hook-destroys-dead-keys The reason I'm posting a new question however is that I seem to have come across a 'solution', but I'm not quite sure how to implement it. This blog post seems to propose a solution to the problem of ToUnicode killing dead-key support: http://blogs.msdn.com/michkap/archive/2005/01/19/355870.aspx However I'm not sure how to implement the suggested solution. A push in the right direction would be greatly appreciated. To be clear, the part I'm referring to is this: "There are two ways to work around this: 1) You can keep calling ToUnicode with the same info until it is cleared out and then call it one more time to put the state back where it was if you had never typed anything, or 2) You can load all of the keyboard info ahead of time and then when they type information you can look up in your own info cache what the keystrokes mean, without having to call APIs later." I'm not quite sure how to do either of those things (keyboards and internationalization are far from my strong point), so any help would be greatly appreciated. Thanks
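
    A heavily hedged sketch of workaround (1): when ToUnicode reports a dead key (return value -1), it has just consumed the dead-key state, and calling it again with identical arguments is commonly used to plant that state back (vkCode, scanCode and keyState stand in for the hook's parameters):

      WCHAR buf[8];
      int rc = ToUnicode(vkCode, scanCode, keyState, buf, 8, 0);
      if (rc < 0) {
          /* the call above cleared the kernel's dead-key state; repeating it
             with the same arguments restores the state for the next keystroke */
          ToUnicode(vkCode, scanCode, keyState, buf, 8, 0);
      }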

    Read the article

  • unix at command pass variable to shell script?

    - by Andrew
    Hi, I'm trying to set up a simple timer that gets started from a Rails application. This timer should wait out its duration and then start a shell script that will start up ./script/runner and complete the initial request. I need script/runner because I need access to ActiveRecord. Here are my test lines in Rails: output = `at #{(Time.now + 60).strftime("%H:%M")} < #{Rails.root}/lib/parking_timer.sh STRING_VARIABLE` return render :text => output Then my parking_timer.sh looks like this: #!/bin/sh ~/PATH_TO_APP/script/runner -e development ~/PATH_TO_APP/lib/ParkingTimer.rb $1 echo "All Done" Finally, ParkingTimer.rb reads the passed variable with ARGV.each do|a| puts "Argument: #{a}" end The problem is that the Unix command "at" doesn't seem to like variables and only wants to deal with filenames. I get one of two errors depending on how I position the quotes. If I put quotes around the right-hand side like so ... "~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE" I get: -bash: ~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE: No such file or directory If I leave the quotes out, I get: at: garbled time This is all happening on a Mac OS 10.6 box running Rails 2.3 & Ruby 1.8.6. I've already messed around with BackgrounDrb, and decided it's a total PITA. I need to be able to cancel the job at any time before it is due.
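
    A minimal sketch of the usual workaround: at takes its commands on stdin, so pipe the whole command line (argument included) in with echo instead of redirecting the script file:

      cmd = "#{Rails.root}/lib/parking_timer.sh STRING_VARIABLE"
      output = `echo "#{cmd}" | at #{(Time.now + 60).strftime("%H:%M")}`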

    Read the article

  • Persistent warning message about "initWithDelegate"!

    - by RickiG
    Hi This is not an actual Xcode error message, it is a warning that has been haunting me for a long time. I have found no way of removing it and I think I may have overstepped some unwritten naming-convention rule. If I build a class, most often extending NSObject, whose only purpose is to do some task and report back when it has data, I often give it a convenience constructor like "initWithDelegate". The first time I did this in my current project was for a class called ISWebservice, which has a protocol like this: @protocol ISWebserviceDelegate @optional - (void) serviceFailed:(NSError*) error; - (void) serviceSuccess:(NSArray*) data; @required @end Declared in my ISWebservice.h interface, right below my import statements. I have other classes that use a convenience constructor named "initWithDelegate", e.g. "InternetConnectionLost.h". This class does not, however, have its methods as optional; there are no @optional/@required tags in the declaration, i.e. they are all required. Now my warning pops up every time I instantiate one of these classes with convenience constructors written later than the ISWebservice. So when utilizing the "InternetConnectionLost" class, even though the entire class owning the "InternetConnectionLost" object has nothing to do with the "ISWebservice" class - no imports, no methods being called, no nothing - the warning goes: 'ClassOwningInternetConnectionLost' does not implement the 'ISWebserviceDelegate' protocol It does not break anything, crash at runtime or do me any harm, but it has begun to bug me as I near release. Also, because several classes use the "initWithDelegate" constructor naming, I have 18 of these warnings in my build results, and I am getting uncertain if I did something wrong, being fairly new at this language. Hope someone can shed a little light on this warning, thank you :)

    Read the article

  • Javascript/PHP and timezones

    - by James
    Hi, I'd like to be able to guess the user's timezone offset and whether or not daylight savings is being applied. Currently, the most definitive code that I've found for this is here: http://www.michaelapproved.com/articles/daylight-saving-time-dst-detect/ So this gives me the offset along with the DST indicator. Now, I want to use these in my PHP scripts in order to output the local date/time for the user... but what's best for this? I figure I have 2 options: a) Pick a random timezone which has the same offset and DST setting from the output of timezone_abbreviations_list(). Then call date_timezone_set() with this in order to apply the correct treatment to the time. b) Continue treating the date as UTC but just do some timestamp addition to add the appropriate number of hours on. My feeling is that option B is the best way. The reason for this is that with A, I could be using a timezone which, although correct in terms of offset/DST, may have some obscure rules in place behind the scenes that could give surprising results (I don't know of any but nonetheless I don't think I can rule it out). I'd then re-check the timezone using JavaScript at the start of each session in order to capture when either the user's timezone changes (very unlikely) or they pass into the DST period. Sorry for the brain dump - I'm really just after some sort of reassurance that the approaches above are valid. Thanks, James.
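
    A minimal sketch of option (b), hedged: the cookie name is an assumption, and JavaScript's getTimezoneOffset() returns (UTC - local) in minutes, already DST-adjusted at the moment it runs, so it is subtracted:

      // client side stores the offset once per session, e.g.:
      //   document.cookie = "tz_offset=" + new Date().getTimezoneOffset();
      $offset  = isset($_COOKIE['tz_offset']) ? (int) $_COOKIE['tz_offset'] : 0;
      $localTs = time() - $offset * 60;   // shift UTC to the user's wall clock
      echo gmdate('Y-m-d H:i', $localTs); // gmdate() adds no server timezone on top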

    Read the article

  • How to insert an Array/Object into SQL (best practice)

    - by Jason
    I need to store three items as an array in a single column and be able to quickly/easily modify that data in later functions. [---YOU CAN SKIP THIS PART IF YOU TRUST ME---] To be clear, I love and use x_ref tables all the time, but an x_ref doesn't work here because this is not a one-to-many relationship. I am making a project management tool that, among other things, assigns a user to a project and assigns hours to that project on a weekly basis, per user, sometimes many weeks into the future. Of course there are many projects, a project can have many team members, and a team member can be involved with many projects at one time, BUT it's not one-to-many, because a team member can be working many weeks on the same project but have different hours for different weeks. In other words, each object really is unique. Also/finally, this data can be changed at any time by any team member - hence it needs to be easy to manipulate. Basically, I need to handle three values (the team member, the week we're talking about, and how many hours) dropped into a project row in the projects table (under the column for project team members) and treated as one item - a team member - that will actually be part of a larger array of all the team members involved on the project. [---END SKIP, START READING HERE :)---] So assuming that the application's general schema and relation tables aren't total crap, and that we are in fact up against a wall in this one case to use an array/object as a value for this column, is there a best practice for that? Like a particular SQL data type? A particular object/array format? CSV? JSON? XML? Most of the app is in C#, but (for very odd reasons that I won't explain) we could really use any environment if there is a particular one that handles this well. For the moment, I am thinking either (webservice + JS/JSON) or PHP unserialize/serialize (but I am a bit sketched out by the PHP solution because it seems a bit cumbersome when using Ajax). Thoughts anyone?
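
    A hedged sketch of one common answer: serialize the (member, week, hours) triples to JSON in a TEXT/NVARCHAR(MAX) column, which stays readable from C#, PHP and JavaScript alike (Json.NET is an assumed dependency; the types are illustrative):

      using Newtonsoft.Json;                  // Json.NET, an assumption
      using System.Collections.Generic;

      public class Assignment
      {
          public int MemberId { get; set; }
          public string Week { get; set; }    // e.g. "2010-W23"
          public decimal Hours { get; set; }
      }

      public static class AssignmentColumn
      {
          // serialize the triples for storage in the column ...
          public static string Pack(List<Assignment> a)
          {
              return JsonConvert.SerializeObject(a);
          }

          // ... and round-trip them back out when someone edits their hours
          public static List<Assignment> Unpack(string json)
          {
              return JsonConvert.DeserializeObject<List<Assignment>>(json);
          }
      }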

    Read the article

  • IList<Item> Collection Class accessing database

    - by Mike
    Hi, I have a database with Users. Users have Items. These Items can change actively. How do you access the items in a collection type format? For the user, I fill all the user properties at the time of instantiation. If I load the user's items at the time of the instantiation, and the items change, they will have old data. I was thinking, maybe I need an ItemCollection class and have that a field/property apart of the user class, that way to traverse all the user's items I could use a foreach loop. So, my question is, what is the best practice/best way of accessing the items from a database using some sort of collection? On accessing the particular Item, it needs to get the latest database information, and when the user does do a foreach loop, the latest item information must be available. I.e. What I'm trying to do Console.WriteLine(User.Items[3].ID); returns 5. //this updates the item information and saves it to the database. User.Items[3].ID = 13; //Add a new item to the database. User.Items.Add(new Item { id = 17}); foreach (Item item in User.Items) { //this would traverse all items in the database. //not some cached copy at the time of instantiation of the user. }
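
    A hedged sketch of the read-through idea: the collection caches nothing and defers every access to the data layer, so indexer reads and enumeration always reflect current rows (the Database.* helpers are hypothetical):

      using System.Collections.Generic;

      public class ItemCollection
      {
          private readonly int _userId;
          public ItemCollection(int userId) { _userId = userId; }

          // the indexer reads through to the database on every access,
          // so the caller never sees a stale snapshot
          public Item this[int id]
          {
              get { return Database.LoadItem(_userId, id); }
          }

          public void Add(Item item) { Database.InsertItem(_userId, item); }

          // enumeration fetches a fresh list each time a foreach begins
          public IEnumerator<Item> GetEnumerator()
          {
              return Database.LoadItems(_userId).GetEnumerator();
          }
      }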

    Read the article

  • Flex 3: should I provide prepared data to my component or make it to process data before display?

    - by grapkulec
    I'm starting to learn a little Flex just for fun, and maybe to prove that I still can learn something new :) I have an idea for a project, and one of its parts is a tree component which could display data in different ways depending on configuration. The idea There is a list of objects having properties like id, date, time, name, description. And sometimes the list should be displayed like this: first level: date second level: time third level: name and sometimes like this: first level: year second level: month third level: day fourth level: time and name By level I mean level of nesting, of course. So, we can have years, that have months, that have days, that have hours, and so forth. The problem What could be the best way to do it? I mean, should I prepare data for different ways of nesting outside of the component, or even outside of Flex? I can do it at the web service level in C#, where I plan to have the database access layer, and send to Flex nice, ready-to-display XML or an array of objects. But I wonder if that won't cause additional and maybe unnecessary network traffic. I tried to hack some code in my component to convert my data objects into XML or an ArrayCollection, but I don't know enough Flex and got stuck on eliminating duplicates and getting specific data by some key value. Usually to do such things I have the STL with maps, sets and vectors, and I find Flex arrays and even Dictionary a little bit confusing (I've read the language reference and googled without any significant luck). So, to sum things up: should I give my tree component data prepared just for the chosen type of display, or should I try to do it internally, inside the component (or some helper class written in ActionScript)?

    Read the article

  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, MB) can do. This led me to wonder what kind of raw memory bandwidth a modern commodity machine can muster? My aging Core2Duo gives me 1.5 GB/s if I use 1 thread to memcpy() (and understandably less if I memcpy() with both cores simultaneously.) While 1.5 GB/s is a fair amount of data, the real-time application I'm working on will have something like 1/50th of a second, which means 30 MB. Basically, almost nothing. And perhaps worst of all, as I add multiple cores, I can process a lot more data without any increased performance for the needed duplication step. But a low-end Core2Duo isn't exactly hot stuff these days. Are there any sites with information, such as actual benchmarks, on raw memory bandwidth on current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it will get? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?
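
    For the benchmark half of the question, a minimal sketch (hedged: coarse single-run timing, no warm-up loop): copy a buffer much larger than cache and report sustained one-core bandwidth:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <time.h>

      int main(void)
      {
          size_t size = 256 * 1024 * 1024;        /* 256 MB, far beyond any cache */
          char *src = malloc(size), *dst = malloc(size);
          if (!src || !dst) return 1;
          memset(src, 1, size);                   /* fault both buffers in first */
          memset(dst, 0, size);

          clock_t start = clock();                /* coarse, single-run timing */
          memcpy(dst, src, size);
          double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

          /* a copy moves size bytes read plus size bytes written */
          printf("%.2f GB/s\n", 2.0 * size / secs / 1e9);
          free(src);
          free(dst);
          return 0;
      }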

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and in a couple of months is expected to grow to 50M users (not concurrent users though). I would like to have an architecture that can be scaled out easily without much effort. In order to explain, I would like to use a trivial scenario. Let's say, User entities and services such as CreateUser, AuthenticateUser etc., are simple method calls for the Page Controllers. But once the traffic increases, for example, authenticating users (or such services related to user entities) has to be moved out to a different internal server to spread the load. But at the same time, using RPC calls over the network when the user count is 40K would become overkill. My proposal was to use IPC initially and when we need to scale out we can internally switch to TCP-based RPC calls so that it can easily scale out. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with and move on to a TcpListener later on. If we have a proper design that can encapsulate the above approach, it would be easy for us to scale out services onto multiple network servers but at the same time avoid network calls when the user count is small. Is this the best approach? Any suggestions would be great .. Note: The database scaling is definitely the second-phase optimization so we have already made an architectural design in place to easily partition data when traffic increases. The primary bottleneck would be application servers over the time period.
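
    A hedged sketch of the encapsulation being proposed: hide the transport behind an interface so moving AuthenticateUser from in-process calls to pipe/TCP RPC is a composition change, not a call-site rewrite (all names here are illustrative):

      public interface IUserService
      {
          bool AuthenticateUser(string name, string password);
      }

      // phase 1: a plain in-process implementation, no network involved
      public class InProcUserService : IUserService
      {
          public bool AuthenticateUser(string name, string password)
          {
              return name == "demo" && password == "demo";  // placeholder logic
          }
      }

      // phase 2: same interface, forwarded over named pipes or TCP later on
      public class RemoteUserService : IUserService
      {
          public bool AuthenticateUser(string name, string password)
          {
              // hypothetical: marshal the call to an internal auth server here
              throw new System.NotImplementedException();
          }
      }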

    Read the article

  • R: disentangling scopes

    - by rescdsk
    Hi, Right now, in my R project, I have functions1.R with doFoo() and doBar(), functions2.R with other functions, and main.R with the main program in it, which first does source('functions1.R'); source('functions2.R'), and then calls the other functions. I've been starting the program from the R GUI in Mac OS X, with source('main.R'). This is fine the first time, but after that, the variables that were defined the first time through the program are still defined the second time functions*.R are sourced, and so the functions get a whole bunch of extra variables defined. I don't want that! I want an "undefined variable" error when my function uses a variable it shouldn't! Twice this has given me very late nights of debugging! So how do other people deal with this sort of problem? Is there something like source(), but that makes an independent namespace that doesn't fall through to the main one? Making a package seems like one solution, but it seems like a big pain in the butt compared to e.g. Python, where a source file is automatically a separate namespace. Any tips? Thank you!
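
    A hedged sketch of the isolation idea: source the helper files into an environment whose parent chain stops at base, so any variable main.R happened to define raises the wanted "object not found" error (caveat: functions sourced this way also lose sight of attached packages beyond base):

      fns <- new.env(parent = baseenv())   # lookup chain ends at base, not .GlobalEnv
      sys.source("functions1.R", envir = fns)
      sys.source("functions2.R", envir = fns)
      fns$doFoo()                          # call through the environment explicitly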

    Read the article

  • High performance text file parsing in .net

    - by diamandiev
    Here is the situation: I am making a small prog to parse server log files. I tested it with a log file with several thousand requests (between 10,000 and 20,000, I don't know exactly). What I have to do is load the log text files into memory so that I can query them. This is taking the most resources. The methods that take the most CPU time are these (worst culprits first): string.Split - splits the line values into an array of values; string.Contains - checking if the user agent contains a specific agent string (to determine browser ID); string.ToLower - various purposes; StreamReader.ReadLine - to read the log file line by line; string.StartsWith - determine if a line is a column-definition line or a line with values. There were some others that I was able to replace. For example, the dictionary getter was taking lots of resources too. Which I had not expected, since it's a dictionary and should have its keys indexed. I replaced it with a multidimensional array and saved some CPU time. Now I am running on a fast dual core and the total time it takes to load the file I mentioned is about 1 sec. Now this is really bad. Imagine a site that has tens of thousands of visits a day. It's going to take minutes to load the log file. So what are my alternatives? If any, because I think this is just a .NET limitation and I can't do much about it.
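
    A hedged sketch of the cheapest wins: stream the file and use ordinal, case-insensitive comparisons so no lowercased copy of each line is ever allocated:

      using System;
      using System.IO;

      // 'path' stands in for the log file location
      using (StreamReader reader = new StreamReader(path))
      {
          string line;
          while ((line = reader.ReadLine()) != null)
          {
              if (line.StartsWith("#", StringComparison.Ordinal))
                  continue;  // column-definition line, not data

              // ordinal IndexOf replaces ToLower()+Contains(): same test,
              // but no temporary lowercased string per call
              bool isFirefox = line.IndexOf("Firefox",
                  StringComparison.OrdinalIgnoreCase) >= 0;

              string[] fields = line.Split(' ');  // still one array per line
          }
      }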

    Read the article

  • Escaping single quote in PHP when inserting into MySQL

    - by hairdresser-101
    PLEASE NOTE: I am reposting as the original was not answered correctly... I AM LOOKING FOR THE WHY - NOT THE SOLUTION - I KNOW THE SOLUTION, WHAT I DON'T UNDERSTAND IS THE WHY! I have a perplexing issue that I can't seem to comprehend... I'm hoping someone here might be able to point me in the right direction... I have two SQL statements: - the first enters information from a form into the database. - the second takes data from the database entered above, sends an email and then logs the details of the transaction. The problem is that it appears that a single quote is triggering a MySQL error on the second entry only!!! The first instance works without issue, but the second instance triggers the mysql_error(). Does data from a form get handled differently from data retrieved from the database? Query#1 - This works without issue (and without escaping the single quote) $result = mysql_query("INSERT INTO job_log (order_id, supplier_id, category_id, service_id, qty_ordered, customer_id, user_id, salesperson_ref, booking_ref, booking_name, address, suburb, postcode, state_id, region_id, email, phone, phone2, mobile, delivery_date, stock_taken, special_instructions, cost_price, cost_price_gst, sell_price, sell_price_gst, ext_sell_price, retail_customer, created, modified, log_status_id) VALUES ('$order_id', '$supplier_id', '$category_id', '{$value['id']}', '{$value['qty']}', '$customer_id', '$user_id', '$salesperson_ref', '$booking_ref', '$booking_name', '$address', '$suburb', '$postcode', '$state_id', '$region_id', '$email', '$phone', '$phone2', '$mobile', STR_TO_DATE('$delivery_date', '%d/%m/%Y'), '$stock_taken', '$special_instructions', '$cost_price', '$cost_price_gst', '$sell_price', '$sell_price_gst', '$ext_sell_price', '$retail_customer', '".date('Y-m-d H:i:s', time())."', '".date('Y-m-d H:i:s', time())."', '1')"); Query#2 - This fails when entering a name with a single quote (i.e. O'Brien) $query = mysql_query("INSERT INTO message_log (order_id, timestamp, message_type, email_from, supplier_id, primary_contact, secondary_contact, subject, message_content, status) VALUES ('$order_id', '".date('Y-m-d H:i:s', time())."', '$email', '$from', '$row->supplier_id', '$row->primary_email' ,'$row->secondary_email', '$subject', '$message_content', '1')");
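
    A hedged guess at the why, since that is what is being asked: with magic_quotes_gpc enabled, form input (query #1) arrives pre-escaped by PHP, while values read back out of the database (query #2) are raw again, so an O'Brien coming from $row breaks the string. Escaping at the point of use behaves identically for both paths:

      // escape every value interpolated into the SQL string, regardless of
      // whether it came from a form or from a previous query
      $subject         = mysql_real_escape_string($subject);
      $message_content = mysql_real_escape_string($message_content);
      // (column list abbreviated from the original query)
      $query = mysql_query("INSERT INTO message_log (order_id, subject, message_content)
                            VALUES ('$order_id', '$subject', '$message_content')");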

    Read the article

  • Simple ASP.NET MVC views without writing a controller

    - by Jake Stevenson
    We're building a site that will have very minimal code, it's mostly just going to be a bunch of static pages served up. I know over time that will change and we'll want to swap in more dynamic information, so I've decided to go ahead and build a web application using ASP.NET MVC2 and the Spark view engine. There will be a couple of controllers that will have to do actual work (like in the /products area), but most of it will be static. I want my designer to be able to build and modify the site without having to ask me to write a new controller or route every time they decide to add or move a page. So if he wants to add a "http://mysite.com/News" page he can just create a "News" folder under Views and put an index.spark page within it. Then later if he decides he wants a /News/Community page, he can drop a community.spark file within that folder and have it work. I'm able to have a view without a specific action by making my controllers override HandleUnknownAction, but I still have to create a controller for each of these folders. It seems silly to have to add an empty controller and recompile every time they decide to add an area to the site. Is there any way to make this easier, so I only have to write a controller and recompile if there's actual logic to be done? Some sort of "master" controller that will handle any requests where there was no specific controller defined?
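
    A hedged sketch of one way to get that "master" controller (MVC2 route syntax; the view-path mapping is an assumption and would need validating against path traversal in real use):

      // registered last in Global.asax, after the specific routes, so real
      // controllers still win:
      routes.MapRoute(
          "StaticPages",
          "{*path}",
          new { controller = "Page", action = "Render", path = "Home" });

      public class PageController : Controller
      {
          public ActionResult Render(string path)
          {
              // the view path mirrors the URL: /News/Community maps to
              // Views/News/Community.spark
              return View("~/Views/" + path + ".spark");
          }
      }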

    Read the article
