Search Results

Search found 15021 results on 601 pages for 'absolutely free'.


  • HTTP POST parameters order / REST URLs

    - by pq
    Let's say that I'm uploading a large file via a POST HTTP request. Let's also say that I have another parameter (other than the file) that names the resource which the file is updating. The resource name cannot be part of the URL the way you can do it with REST (e.g. foo.com/bar/123); let's say this is due to a combination of technical and political reasons. The server needs to ignore the file if the resource name is invalid or if, say, the IP address and/or the logged-in user are not authorized to update the resource. It looks like, if this POST came from an HTML form that contains the resource name first and the file field second, this order is preserved in the POST request for most (all?) browsers. But it would be naive to fully rely on that, no? In other words, the order of HTTP parameters is insignificant and a client is free to construct the POST in any order. Isn't that true? Which means that, at least in theory, the server may end up storing the whole large file before it can deny the request. It seems to me that this is a clear case where RESTful URLs have an advantage, since you don't have to look at the POST content to perform certain authorization/error checking on the request. Do you agree? What are your thoughts and experiences?

    Read the article

  • Can a large transaction log cause cpu hikes to occur

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the db is 15 GB, with roughly 5 GB for the db and 10 GB for the transaction log. Just recently a web application that connects to that db has been timing out. I have traced the actions on the web page and examined the queries that execute whilst these web operations are performed. There is nothing untoward in the execution plan. The query itself uses multiple joins but completes very quickly. However, the db server's CPU spikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say multiple, read about 5). Under this load, timeouts start to occur. I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12 GB of free space on the disk currently. The configuration is a little out of my hands, but the db and log are both on the same physical disk. I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing and this app has been responsive for a few years (i.e. it's a recent manifestation). Many thanks,

    Read the article

  • I write barely functional scripts that tend to not be reusable and make the baby Jesus cry. Please help

    - by maxxpower
    I received a request to add around 100 users to a Linux box. The users are already in LDAP, so I can't just use newusers and point it at a text file. Another admin is taking care of the LDAP piece, so all I have to do is create the home directories and chown them to the correct user once he adds the users to the box. Creating the directories isn't a problem, but I'd like a more elegant script for chowning them to the correct user. What I have currently looks basically like:
        chown -R testuser1 testgroup1 /home/testuser1; chown -R testuser2 testgroup2 /home/testgroup2; chown -R testuser3 testgroup1 /home/testuser3
    Basically I took the request with the user names and group names, popped it into Excel, added a column of "chown -R" to the front, then added a column of "/", copied and pasted the username column after it, added a column of ";" and dragged it down to the second-to-last row. Popped it into Notepad, ran some quick find-and-replaces, and in less than a minute I had a completed request and a sad, empty feeling. I know this was a really ghetto method and I'm trying to get away from using Excel to avoid learning new scripting techniques, so here's my real question.
    tl;dr: I made 100 home directories and chowned them to the correct users, but it was ugly. Actual question below. You have a file named idlist that looks like this (only with, say, 1000 users and real usernames and groups):
        testuser1 testgroup1
        testuser2 testgroup2
        testuser3 testgroup1
    Write a script that creates home directories for all the users and chowns the created directories to the correct user and group. To make the directories I used the following (feel free to flame/correct me on this as well):
        var=`cut -f1 -d" " idlist`
        mkdir $var
    (I used backticks, not apostrophes, around the cut command.)
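    For illustration, a minimal Python sketch of the task as described, assuming the idlist format shown above and that the users and groups already exist on the box (via LDAP/NSS); the filename and permissions are placeholders, not part of the request:
        # create_homes.py - hypothetical sketch, not a hardened script
        import os
        import shutil

        with open("idlist") as f:
            for line in f:
                if not line.strip():
                    continue                       # skip blank lines
                user, group = line.split()[:2]     # "user group" pairs
                home = os.path.join("/home", user)
                os.makedirs(home, exist_ok=True)   # like mkdir -p
                shutil.chown(home, user=user, group=group)
                os.chmod(home, 0o750)
    It would need to run as root (or with equivalent privileges), since chown requires them.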

    Read the article

  • Any HTTP proxies with explicit, configurable support for request/response buffering and delayed connections

    - by Carlos Carrasco
    When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5 ms. I am looking for an HTTP server, balancer or proxy server that supports the following:
    1. A request arrives at the proxy.
    2. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
    3. The proxy server stops buffering the request when: a size limit has been reached (say, 4 KB), or the request has been received completely, headers and body.
    4. Only now, with (part of) the request in memory, a connection is opened to the backend and the request is relayed.
    5. The backend sends back the response. Again the proxy server starts buffering it immediately (up to a more generous size, say 64 KB). Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.
    6. The proxy sends back the response to the mobile client, as fast or as slow as it is capable of, without having a connection to the backend tying up resources.
    I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? And is there one that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model? (Side rant: I would be using nginx but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh)
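    As a concrete illustration of steps 1-6, here is a rough Python/asyncio sketch of a front-end that buffers the whole request before opening the backend connection, then buffers the whole response before relaying it to the slow client. The host/port values are placeholders, and for brevity it assumes one HTTP request per connection, a Content-Length body (no chunked encoding), a backend that closes its connection after responding, and requests that fit within the buffer limit:
        import asyncio

        REQUEST_BUFFER_LIMIT = 4 * 1024                    # step 3: request buffer cap
        BACKEND_HOST, BACKEND_PORT = "127.0.0.1", 8080     # placeholder backend

        async def read_full_request(reader):
            # Steps 1-3: buffer headers + body without touching the backend.
            data = b""
            while len(data) < REQUEST_BUFFER_LIMIT:
                chunk = await reader.read(4096)
                if not chunk:
                    break
                data += chunk
                head, sep, body = data.partition(b"\r\n\r\n")
                if sep:
                    length = 0
                    for line in head.split(b"\r\n"):
                        if line.lower().startswith(b"content-length:"):
                            length = int(line.split(b":", 1)[1])
                    if len(body) >= length:
                        break
            return data

        async def handle_client(reader, writer):
            request = await read_full_request(reader)
            # Step 4: only now open the backend connection and relay the request.
            b_reader, b_writer = await asyncio.open_connection(BACKEND_HOST, BACKEND_PORT)
            b_writer.write(request)
            await b_writer.drain()
            # Step 5: buffer the whole response, then free the backend immediately.
            response = await b_reader.read()               # backend closes when done
            b_writer.close()
            # Step 6: feed the (possibly slow) mobile client from our own buffer.
            writer.write(response)
            await writer.drain()
            writer.close()

        async def main():
            server = await asyncio.start_server(handle_client, "0.0.0.0", 8000)
            async with server:
                await server.serve_forever()

        if __name__ == "__main__":
            asyncio.run(main())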

    Read the article

  • 1. I fill out a form & click submit. 2. I get the results page. Goal: Get the same results without filling out the form

    - by Chris
    This is my first time posting - I greatly appreciate any and all guidance on this subject. Background: I am building a real estate web site. I would like to use the free IDX data provided by my local MLS board. The MLS board does not allow me the option of displaying a predefined search and only provides me with a link to the search form. After filling out the search form, I am able to view the results. Goal: I would like to bypass this step and frame the results page into a GoDaddy website I am building, which supports HTML. Here is a link to the search page: http://fgcmls.rapmls.com/scripts/mgrqispi.dll?APPNAME=Fortmyers&PRGNAME=MLSLogin&ARGUMENT=vBSJvLQtMcbg7F0O0KnXDiggv%2F12B0S6Ss9wv4510QA%3D&KeyRid=1
    I am trying to only show the listings that appear in my neighborhood. Options include:
    1. Property Type - Residential
    2. GEO Area - FM11
    3. Developments - Fiddlesticks Country Club
    Once these criteria are entered, I have the page needed to make this project work. Thank all of you for taking the time to read this and for the time you spend helping me out. Best regards, Chris

    Read the article

  • Algorithm: possible amounts (over)paid for a specific price, based on denominations

    - by Wrikken
    In a current project, people can order goods delivered to their door and choose 'pay on delivery' as a payment option. To make sure the delivery guy has enough change, customers are asked to input the amount they will pay (e.g. delivery is 48,13, they will pay with 60,- (3*20,-)). Now, if it were up to me I'd make it a free field, but apparently higher-ups have decided it should be a selection based on available denominations, without giving amounts that would result in a set of denominations which could be smaller. Example:
        denominations = [1, 2, 5, 10, 20, 50]
        price = 78.12
        possibilities: 79 (multitude of options), 80 (e.g. 4*20), 90 (e.g. 50+2*20), 100 (2*50)
    It's international, so the denominations could change, and the algorithm should be based on that list. The closest I have come which seems to work is this:
        for all denominations in reversed order (large => small)
            add ceil(price/denomination) * denomination to possibles
            baseprice = floor(price/denomination) * denomination
            for all smaller denominations as subdenomination in reversed order
                add baseprice + (ceil((price - baseprice) / subdenomination) * subdenomination) to possibles
            end for
        end for
        remove doubles
        sort
    It seems to work, but this has emerged after wildly trying all kinds of compact algorithms, and I cannot defend why it works, which could lead to some edge case / new countries getting wrong options, and it does generate some serious amounts of doubles. As this is probably not a new problem, and Google et al. could not provide me with an answer save for loads of pages calculating how to make exact change, I thought I'd ask SO: have you solved this problem before? Which algorithm? Any proof it will always work?
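    For reference, a small Python sketch of the pseudocode above, working in integer cents to avoid floating-point rounding (the function name and the cents conversion are only illustrative):
        import math

        def overpay_options(price_cents, denominations):
            # denominations are whole currency units, e.g. [1, 2, 5, 10, 20, 50]
            denoms = sorted(d * 100 for d in denominations)   # in cents, small -> large
            possibles = set()
            for i in range(len(denoms) - 1, -1, -1):          # large -> small
                d = denoms[i]
                possibles.add(math.ceil(price_cents / d) * d)
                base = (price_cents // d) * d
                for s in reversed(denoms[:i]):                # smaller denominations
                    possibles.add(base + math.ceil((price_cents - base) / s) * s)
            return sorted(possibles)                          # doubles removed by the set

        print(overpay_options(7812, [1, 2, 5, 10, 20, 50]))   # [7900, 8000, 9000, 10000]
    With the example above it yields 79, 80, 90 and 100 (in cents), matching the listed possibilities.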

    Read the article

  • What is the best software design to use in this scenario

    - by domdefelice
    I need to generate HTML snippets using jQuery. The creation of those snippets depends on some data. The data is stored server-side, in session (where PHP is used). At the moment I have achieved this by:
    - retrieving the data from the server via AJAX in the form of JSON, and
    - building the snippets via specific JavaScript functions that read those data.
    The problem is that the complexity of the data is getting bigger, and hence the serialization into JSON is getting even more difficult, since I can't do it automatically. I can't do it automatically because some of the information is sensitive, so I generate a "stripped" version to send to the client. I know it is difficult to understand without any code to read, but I am hoping this is a common scenario and would be glad for any tip, suggestion or even design pattern you can give me. Should I store both a complete and a stripped version of the data on the server and then use some library to automatically generate the JSON from the stripped data? But this also means I have to keep the two data sets synchronized. Or maybe I could move the logic server-side, this way avoiding sending the data. But this means sending JavaScript code (since I rely on jQuery). Maybe not a good idea. Feel free to ask me more details if this is not clear. Thank you for any help

    Read the article

  • If attacker has original data and encrypted data, can they determine the passphrase?

    - by Brad Cupit
    If an attacker has several distinct items (for example: e-mail addresses) and knows the encrypted value of each item, can the attacker more easily determine the secret passphrase used to encrypt those items? Meaning, can they determine the passphrase without resorting to brute force? This question may sound strange, so let me provide a use case:
    1. User signs up to a site with their e-mail address.
    2. Server sends that e-mail address a confirmation URL (for example: https://my.app.com/confirmEmailAddress/bill%40yahoo.com).
    3. Attacker can guess the confirmation URL and therefore can sign up with someone else's e-mail address, and 'confirm' it without ever having to sign in to that person's e-mail account and see the confirmation URL. This is a problem.
    Instead of sending the e-mail address as plain text in the URL, we'll send it encrypted with a secret passphrase. (I know the attacker could still intercept the e-mail sent by the server, since e-mails are plain text, but bear with me here.) If an attacker then signs up with multiple free e-mail accounts and sees multiple URLs, each with the corresponding encrypted e-mail address, could the attacker more easily determine the passphrase used for encryption?
    Alternative Solution: I could instead send a random number or a one-way hash of their e-mail address (plus random salt). This eliminates storing the secret passphrase, but it means I need to store that random number/hash in the database. The original approach above does not require storage in the database. I'm leaning towards the one-way-hash-stored-in-the-db, but I still would like to know the answer: does having multiple unencrypted e-mail addresses and their encrypted counterparts make it easier to determine the passphrase used?
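    To make the alternative concrete, a small Python sketch of the salted one-way hash approach (the function names are only illustrative; the salt and hash would be stored server-side and the hash used as the confirmation token):
        import hashlib
        import secrets

        def make_confirmation_token(email):
            # Random salt plus a one-way hash of the (normalized) address.
            salt = secrets.token_hex(16)
            token = hashlib.sha256((salt + email.lower()).encode("utf-8")).hexdigest()
            return salt, token            # store both in the database

        def confirm(email, salt, token):
            expected = hashlib.sha256((salt + email.lower()).encode("utf-8")).hexdigest()
            return secrets.compare_digest(expected, token)

        salt, token = make_confirmation_token("bill@yahoo.com")
        print("https://my.app.com/confirmEmailAddress/" + token)
        print(confirm("bill@yahoo.com", salt, token))   # True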

    Read the article

  • What is the purpose of the garbage (files) that Qt Creator auto-generates and how can I tame them?

    - by Venemo
    Hello everyone, I'm fairly new to Qt. I'm using the new Nokia Qt SDK beta and I'm working to develop a small application for my Nokia N900 in my free time. Fortunately, I was able to set up everything correctly, and also to run my app on the device. I learned C++ in school, so I thought it wouldn't be so difficult. I use Qt Creator as my IDE, because it doesn't work with Visual Studio. I also wish to port my app to Symbian, so I have run the emulator a few times, and I also compile for Windows to debug the most evil bugs. (The debugger doesn't work correctly on the device.) I come from a .NET background, so there are some things that I don't understand. When I hit the build button, Qt Creator generates a bunch of files in my project directory:
    - moc_*.cpp files - I don't know their purpose. Could someone tell me?
    - *.o files - I assume these are the object code
    - *.rss files - I don't know their purpose, but they definitely don't have anything to do with RSS
    - Makefile and Makefile.Debug - I have no idea
    - AppName (without extension) - the executable for Maemo, and AppName.sis - the executable for Symbian, I guess?
    - AppName.loc - I have no idea
    - AppName_installer.pkg and AppName_template.pkg - I have no idea
    - qrc_Resources.cpp - I guess this is for my Qt resources
    (where AppName is the name of the application in question) I noticed that these files can be safely deleted; Qt Creator simply regenerates them. The problem is that they pollute my source directory, especially because I use version control, and if they can be regenerated, there is no point in uploading them to SVN. So, could someone please tell me what the exact purpose of these files is, and how I can ask Qt Creator to place them into another directory? Thank you in advance!

    Read the article

  • Help, my CentOS servers keep going down, "No route to host" after a random uptime [closed]

    - by user249071
    Hello, I have a couple of CentOS Linux servers that have a very simple task: they run nginx + FastCGI for PHP, and have some read-only NFS mounts between them. They have some RPC commands to start some downloading processes with wget, nothing fancy, from a main server, but their behavior is very unstable: they simply go down. We tried to monitor RAM, processor usage, even network connections; they don't load up very much - 250 network connections max, 15% processor usage, and memory doesn't even fill up, 2.5 GB out of 8 GB max. I have no idea why a Linux server can go down like that; they aren't even public servers, no domain names installed, no public serving of sites. The only thing that I've discovered was that if I didn't restart the network service every couple of hours or so, the servers became very slow, starting apps very slowly, but not reporting a high usage of resources. Maybe CentOS doesn't free the timed-out connections, or something like that. It's based on Red Hat, right? I'm not a Linux expert, but I'm sure that there are a few guys out there who can easily have an answer to this, or even have some leads on what I can do. I haven't installed Snort, or other things to check whether we have some DoS attacks; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't. Thank you in advance

    Read the article

  • .NET Thread.Abort again

    - by hoodoos
    Again I want to talk about the safety of the Thread.Abort function. I was interested to have some way to abort operations which I can't really control and don't actually want to, but I want to have my threads free as soon as possible to prevent thread starvation in my application. So I wrote some test code to see if it's possible to use Thread.Abort and have the aborting thread clean up resources properly. Here's the code:
        int threadRunCount = 0;
        int threadAbortCount = 0;
        int threadFinallyCount = 0;
        int iterations = 0;
        while( true )
        {
            Thread t = new Thread( () =>
            {
                threadRunCount++;
                try
                {
                    Thread.Sleep( Random.Next( 45, 55 ) );
                }
                catch( ThreadAbortException )
                {
                    threadAbortCount++;
                }
                finally
                {
                    threadFinallyCount++;
                }
            } );
            t.Start();
            Thread.Sleep( 45 );
            t.Abort();
            iterations++;
        }
    So far this code has worked for about 5 minutes; threadRunCount was always equal to threadFinallyCount, and threadAbortCount was somewhat lower in number, because some threads completed with no abort or probably got aborted in the finally. So the question is: am I missing something?

    Read the article

  • How are Scala closures transformed to Java objects?

    - by iguana
    I'm currently looking at closure implementations in different languages. When it comes to Scala, however, I'm unable to find any documentation on how a closure is mapped to Java objects. It is well documented that Scala functions are mapped to FunctionN objects. I assume that the reference to the free variable of the closure must be stored somewhere in that function object (as it is done in C++0x, e.g.). I also tried compiling the following with scalac and then decompiling the class files with JD:
        object ClosureExample extends Application {
          def addN(n: Int) = (a: Int) => a + n
          var add5 = addN(5)
          println(add5(20))
        }
    In the decompiled sources, I see an anonymous subtype of Function1, which ought to be my closure. But the apply() method is empty, and the anonymous class has no fields (which could potentially store the closure variables). I suppose the decompiler didn't manage to get the interesting part out of the class files... Now to the questions: Do you know how the transformation is done exactly? Do you know where it is documented? Do you have another idea how I could solve the mystery?

    Read the article

  • How do I print an HTML document from a web service?

    - by Chris Marasti-Georg
    I want to print HTML from a C# web service. The WebBrowser control is overkill, and does not function well in a service environment, nor does it function well on a system with very tight security constraints. Is there any sort of free .NET library that will support the printing of a basic HTML page? Here is the code I have so far, which is not running properly:
        public void PrintThing(string document)
        {
            if (Thread.CurrentThread.GetApartmentState() != ApartmentState.STA)
            {
                Thread thread = new Thread((ThreadStart) delegate { PrintDocument(document); });
                thread.SetApartmentState(ApartmentState.STA);
                thread.Start();
            }
            else
            {
                PrintDocument(document);
            }
        }

        protected void PrintDocument(string document)
        {
            WebBrowser browser = new WebBrowser();
            browser.DocumentText = document;
            while (browser.ReadyState != WebBrowserReadyState.Complete)
            {
                Application.DoEvents();
            }
            browser.Print();
        }
    This works fine when called from UI-type threads, but nothing happens when called from a service-type thread. Changing Print() to ShowPrintPreviewDialog() yields the following IE script error:
        Error: 'dialogArguments.___IE_PrintType' is null or not an object
        URL: res://ieframe.dll/preview.dlg
    And a small empty print preview dialog appears.

    Read the article

  • How do I Extend Blogengine.Net to collect statistics of visitors?

    - by Stefan
    I love BlogEngine. But from what I can see, it does not collect the standard information about visitors that I would like to see (referrer, browser type and so on). When I log in as Admin I have a menu item named "Referrer". I can choose a weekday and then I'll be presented with 1 or 2 rows like "google.com 4 hits", "itmaskinen.se 6 hits" and so on. But that's not what I want to see; I want to see where my visitors come from: country, IP if possible, how many visitors and so on. If any of you are familiar with BlogEngine.Net and can point me in the right direction to where I would put my own log code, or if you know any visitor-statistics extension that can do it for me, I would be really happy to know. I prefer an extension, because if I make changes myself to BlogEngine it may break later updates I install. BlogEngine.Net is a blog software made in .Net, found here: http://www.dotnetblogengine.net/ And yes, I prefer to take this question here rather than in the BlogEngine.Net forum, you know why. ;) (Anyone, feel free to edit my (bad) English in this post and after that delete this sentence)

    Read the article

  • C#+BDE+DBF problem

    - by Drabuna
    I have a huge problem: I have lots of .dbf files (~50,000) and I need to import them into an Oracle database. I open the connection like this:
        OleDbConnection oConn = new OleDbConnection();
        OleDbCommand oCmd = new OleDbCommand();
        oConn.ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + directory + ";Extended Properties=dBASE IV;User ID=Admin;Password=";
        oCmd.Connection = oConn;
        oCmd.CommandText = @"SELECT * FROM " + tablename;
        try
        {
            oConn.Open();
            resultTable.Load(oCmd.ExecuteReader());
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
        oConn.Close();
        oCmd.Dispose();
        oConn.Dispose();
    I read them in a loop, and then insert into Oracle. Everything's fine. BUT: there are about 1,000 files that I can't open. They raise the exception "not a table". So I googled, and installed the Borland Database Engine. Now everything works fine... but no. Now, when I'm reading files, on the 1024th file an exception is raised: "System resource exceeded". But I have lots of free resources. When I remove BDE, everything's fine again, no "System resource exceeded" error, but I can't read all the files. Help please. PS: Tried using ODBC but nothing changes.

    Read the article

  • Renaming TurboGears 2's Repoze Fields with TGAdmin

    - by William Chambers
    I've been working on renaming TurboGears 2's Repoze 'groups' field to 'roles' to free the namespace and db tables for other purposes. Also, roles makes much more sense to me than groups, because I have a strong Drupal background. Now, I have found some of the docs to do this, such as these:
    http://www.turbogears.org/2.1/docs/main/Auth/Customization.html#customizing-the-model-structure-assumed-by-the-quickstart
    http://code.gustavonarea.net/repoze.what-quickstart/#customizing-the-model-definition
    However, these only go part of the way. I have made all the changes required (I'm pretty sure at least; I've double-checked a few times), as you can see in this diff. This seems to work fine; however, I've run into a rather major issue with the TurboGears admin system. I've tried http://turbogears.org/2.0/docs/main/Extensions/Admin/index.html and it didn't seem to make any difference; however, I'm not 100% sure I did it correctly. The problem occurs when I attempt to go to localhost/admin/permissions/. It causes an Internal Server Error and outputs the following error: http://pastebin.com/YWMH3SiU
    This error does not happen on the Roles/Users pages, and the permissions /edit/1 also works. I'm running Kubuntu 10.04 with TG 2.1b2. (I'm running the beta mostly for easier Mako support, which is really important.) Any help would be very appreciated.

    Read the article

  • Processing RSS/RDF via xml.dom.minidom

    - by Bill
    I'm trying to process a delicious RSS feed via Python. Here's a sample:
        ...
        <item rdf:about="http://weblist.me/">
          <title>WebList - The Place To Find The Best List On The Web</title>
          <dc:date>2009-12-24T17:46:14Z</dc:date>
          <link>http://weblist.me/</link>
          ...
        </item>
        <item rdf:about="http://thumboo.com/">
          <title>Thumboo! Free Website Thumbnails and PHP Script to Generate Web Screenshots</title>
          <dc:date>2006-10-24T18:11:32Z</dc:date>
          <link>http://thumboo.com/</link>
          ...
    The relevant code is:
        def getText(nodelist):
            rc = ""
            for node in nodelist:
                if node.nodeType == node.TEXT_NODE:
                    rc = rc + node.data
            return rc

        dom = xml.dom.minidom.parse(file)
        items = dom.getElementsByTagName("item")
        for i in items:
            title = i.getElementsByTagName("title")
            print getText(title)
    I would think this would print out each title, but instead I basically get blank output. I'm sure I'm doing something stupid wrong, but no idea what?

    Read the article

  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
    I may seem to be a bit wordy at first, but it will hopefully make it easier for all of you to understand what I am doing in the first place. I have an uncommon but enjoyable activity of collecting as many species of wood from around the world as I can (over 2,900 so far). OK, that is the real world. Meanwhile, I have spent over 8 years compiling over 5.8 MB of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome so I could build a new database-driven home for all this research. I am still slow at it but getting there. The original premise was to find evidence of as many species of wood in the world as I can. The more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven.
    So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project. I have a file of all the woods in the second largest wood collection in the world, the Tervuren wood collection in the Netherlands, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren to the 'species' table where I keep the reported data would be a major, desirable advancement in the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place but represent the same wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates.
    At one point, and with the help of others from another forum, I may very well have finally got the proper SQL statement. When I ran it, though, the system said (semi-amusingly at first) that MySQL had gone away! After looking up on the Net what could have done this, one likely reason is that the MySQL timeout lapses, probably because of the large size of the files I am running. I am running this on a rented account on GoDaddy, so I cannot go about adjusting any config file. For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can rename species_master back to just species.sql once I am happy everything worked well.
    The species file has about 18 columns in it, but only 5 columns match the columns in the Tervuren file (name for name, and collation also). The rest of the columns are just along for the ride, so to speak. The common key is the 'species_name' column in both. I am not sure it is at all proper to call one a primary key and the other a foreign key, since there really is no relational connection between them. One is just more data for the other and can disappear afterwards, never to be referred to by the working code in the application. I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names plus others) without generating NEW duplicates in the first place.
    Watch out thinking that a SELECT DISTINCT statement may do the job, because absolutely NO records in the species table must get destroyed in the process, and there is no way (well, that I know of) to tell the DISTINCT command this. Yes, the original 'species' table has duplicates in it even before all this but, trust me, they have to be removed the long hard way, manually, record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form through bringing new names from tervuren_target.species_name into species.species_name. I am hoping and thinking that a straight SQL solution should work, except for that nasty timeout. How do I get past that? Could it mean that I may have to turn to a PHP plus SQL method? Or would I have to break up the Tervuren file into a few smaller ones and run them independently (hope not...)?
    So far, what seems like it should be easy has proven to be unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it may seem on the surface. By the way, I am running a quad-core 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end. I have a direct ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be equivalent to (ahem) lighting a rocket under my project (especially compared to doing this record by record manually)!
    This is my first time in this forum, so I do not know how I will receive any replies. Do I have to come back here periodically, or are replies emailed out as well? It would be great if you CC'd copies to me at billmudry at rogers.com :-)
    Much thanks for your patience and help, Bill Mudry, Mississauga, Ontario, Canada (next to Toronto).

    Read the article

  • Concurrency and Coordination Runtime (CCR) Learning Resources

    - by Harry
    I have recently been learning the ins and outs of the Concurrency and Coordination Runtime (CCR). Finding good learning resources for this relatively new technology has been quite difficult. (A quick Google search brings up "Creedence Clearwater Revival" as the top result!) Some of the resources I have found:
    - Free e-book chapter from WROX on the Robotics Developer Studio
    - Good article/post on InfoQ
    - Robotics member blog
    - Very active MSDN CCR forum - got plenty of help from here!
    - Great MSDN Magazine article by Jeffrey Richter
    - Official CCR User Guide - didn't find this very helpful
    - Great blogging series on CCR
    - iodyner CCR-related blog - Update: moved to here
    - Eight or so videos on Channel9.msdn.com
    - CCR Patterns page on MS Robotics Studio - I haven't read this yet
    - 4 x CCR questions on Stack Overflow - most of the questions have been mine! LOL
    - CCR and DSS toolkit has now been released to MSDN members
    Do you have any good learning resources for the CCR? I really hope that Microsoft will publish more material; so far it has been too Robotics-specific. I believe that MS needs to acknowledge that most people are using the CCR in isolation from the DSS and Robotics Studio.
    Update: The MIX 2010 conference had a presentation by MySpace about how they have used the CCR framework in their middle tier. They also open-sourced the code base (MySpace DataRelay, MIX video presentation).

    Read the article

  • Memory usage in Flash / Flex / AS3

    - by ggambett
    I'm having some trouble with memory management in a Flash app. Memory usage grows quite a bit, and I've tracked it down to the way I load assets. I embed several raster images in a class Embedded, like this:
        [Embed(source="/home/gabriel/text_hard.jpg")]
        public static var ASSET_text_hard_DOT_jpg : Class;
    I then instance the assets this way:
        var pClass : Class = Embedded[sResource] as Class;
        return new pClass() as Bitmap;
    At this point, memory usage goes up, which is perfectly normal. However, nulling all the references to the object doesn't free the memory. Based on this behavior, it looks like the Flash player is creating an instance of the class the first time I request it, but never ever releases it - not without references, calling System.gc(), doing the double LocalConnection trick, or calling dispose() on the BitmapData objects. Of course, this is very undesirable - memory usage would grow until everything in the SWFs is instanced, regardless of whether I stopped using some asset long ago. Is my analysis correct? Can anything be done to fix this?

    Read the article

  • i386 assembly question: why do I need to meddle with the stack pointer?

    - by zneak
    Hello everyone, I decided it would be fun to learn x86 assembly during the summer break. So I started with a very simple hello world program, borrowing on free examples gcc -S could give me. I ended up with this:
        HELLO:
            .ascii "Hello, world!\12\0"
            .text
        .globl _main
        _main:
            pushl %ebp          # 1. puts the base stack address on the stack
            movl %esp, %ebp     # 2. puts the base stack address in the stack address register
            subl $20, %esp      # 3. ???
            pushl $HELLO        # 4. push HELLO's address on the stack
            call _puts          # 5. call puts
            xorl %eax, %eax     # 6. zero %eax, probably not necessary since we didn't do anything with it
            leave               # 7. clean up
            ret                 # 8. return
            # PROFIT!
    It compiles and even works! And I think I understand most of it. Though, magic happens at step 3. If I removed this line, my program would die between the call to puts and the xor, with a misaligned stack error. And if I changed $20 to another value, it'd crash too. So I came to the conclusion that this value is very important. Problem is, I don't know what it does or why it's needed. Can anyone explain? (I'm on Mac OS, if it matters.)

    Read the article

  • Delphi: Alternative to using Reset/ReadLn for text file reading

    - by Ian Boyd
    I want to process a text file line by line. In the olden days I loaded the file into a StringList:
        slFile := TStringList.Create();
        slFile.LoadFromFile(filename);
        for i := 0 to slFile.Count-1 do
        begin
          oneLine := slFile.Strings[i];
          //process the line
        end;
    The problem with that is that once the file gets to be a few hundred megabytes, I have to allocate a huge chunk of memory, when really I only need enough memory to hold one line at a time. (Plus, you can't really indicate progress when the system is locked up loading the file in step 1.) I then tried using the native, and recommended, file I/O routines provided by Delphi:
        var
          f: TextFile;
        begin
          Reset(f, filename);
          while ReadLn(f, oneLine) do
          begin
            //process the line
          end;
    The problem with Assign is that there is no option to read the file without locking (i.e. fmShareDenyNone). The former StringList example doesn't support no-lock either, unless you change it to LoadFromStream:
        slFile := TStringList.Create;
        stream := TFileStream.Create(filename, fmOpenRead or fmShareDenyNone);
        slFile.LoadFromStream(stream);
        stream.Free;
        for i := 0 to slFile.Count-1 do
        begin
          oneLine := slFile.Strings[i];
          //process the line
        end;
    So now, even though I've gained no locks being held, I'm back to loading the entire file into memory. Is there some alternative to Assign/ReadLn, where I can read a file line by line without taking a sharing lock? I'd rather not go directly to Win32 CreateFile/ReadFile and have to deal with allocating buffers and detecting CR, LF, and CRLFs. I thought about memory-mapped files, but there's the difficulty if the entire file doesn't fit (map) into virtual memory, and having to map views (pieces) of the file at a time. Starts to get ugly. I just want Reset with fmShareDenyNone!

    Read the article

  • remove specific values from multi value dictionary

    - by Anthony
    I've seen posts here on how to make a dictionary that has multiple values per key, like one of the solutions presented in this link: Multi Value Dictionary. It seems that I have to use a List<TValue> as the value for the keys, so that a key can store multiple values. The solution in the link is fine if you want to add values, but my problem now is how to remove specific values from a single key. I have this code for adding values to a dictionary:
        private Dictionary<TKey, List<TValue>> mEventDict;

        // this is for initializing the dictionary
        public void Subscribe(eVtEvtId inEvent, VtEvtDelegate inCallbackMethod)
        {
            if (mEventDict.ContainsKey(inEvent))
            {
                mEventDict[inEvent].Add(inCallbackMethod);
            }
            else
            {
                mEventDict.Add(inEvent, new List<TValue>() { v });
            }
        }
        // this is for adding values to the dictionary.
        // if the "key" (inEvent) is not yet present in the dictionary,
        // the key will be added first before the value
    My problem now is removing a specific value from a key. I have this code:
        public void Unsubscribe(eVtEvtId inEvent, VtEvtDelegate inCallbackMethod)
        {
            try
            {
                mEventDict[inEvent].Remove(inCallbackMethod);
            }
            catch (ArgumentNullException)
            {
                MessageBox.Show("The event is not yet present in the dictionary");
            }
        }
    Basically, what I did is just replace the Add() with Remove(). Will this work? Also, if you have any problems or questions with the code (initialization, etc.), feel free to ask. Thanks for the advice.

    Read the article

  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of sales-persons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and sales-persons are free to choose from those combinations to estimate the sales price. The problem is that daily sales require a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly and yearly sales reports, there will be various types of SQL queries I shall need. This is my initial design:
    Product {ID, Code, Name, IsActive}
    ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent}
    SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive}
    SalesPersonSalesAreaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
    Depot {ID, Code, Name, IsActive}
    Outlet {ID, Code, Name, AreaID, IsActive}
    AreaHierarchy {ID, Code, Name, ParentID, AreaLevel, IsActive}
    DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}
    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice; I just need advice regarding the DailySales table. Is there any way to break up this particular table to achieve granularity?

    Read the article

  • Replacement for deprecated SQL Server User Defined Type with a bound Rule and Default

    - by Adam Jones
    We have a User Defined Data Type of YesNo, which is an alias for char(1). The type has a bound Rule (must be Y or N) and a Default (N). The aim of this is that when any of the development team creates a new field of type YesNo, the rule and default are automatically bound to the new column. Rules and Defaults have been deprecated and won't be available in a future version of SQL Server. Is there another way to achieve the same functionality? I should add that I'm aware that I could use CHECK and DEFAULT constraints to replicate the functionality of the bound Rule and Default objects; however, these would have to be applied at each usage of the type, rather than getting the functionality 'for free' by using a UDT which has a bound Rule and Default. The post relates to a database that backs an existing application, rather than a new development, so I'm aware that our use of UDTs is less than optimal. I suspect the answer to the question is 'No'; however, normally when features are deprecated there's an alternative syntax that can be used as a drop-in replacement, so I wanted to pose the question in case someone knew of an alternative.

    Read the article
