Search Results

Search found 11972 results on 479 pages for 'writing'.

Page 372/479

  • How can I write System preferences with Java? Can I invoke UAC?

    - by Jonas
    How can I write system preferences with Java, using Preferences.systemRoot()? I tried:

        Preferences preferences = Preferences.systemRoot();
        preferences.put("/myapplication/databasepath", pathToDatabase);

    But I got this error message:

        2010-maj-29 19:02:50 java.util.prefs.WindowsPreferences openKey
        VARNING: Could not open windows registry node Software\JavaSoft\Prefs at root 0x80000002. Windows RegOpenKey(...) returned error code 5.
        Exception in thread "AWT-EventQueue-0" java.lang.SecurityException: Could not open windows registry node Software\JavaSoft\Prefs at root 0x80000002: Access denied
            at java.util.prefs.WindowsPreferences.openKey(Unknown Source)
            at java.util.prefs.WindowsPreferences.openKey(Unknown Source)
            at java.util.prefs.WindowsPreferences.openKey(Unknown Source)
            at java.util.prefs.WindowsPreferences.putSpi(Unknown Source)
            at java.util.prefs.AbstractPreferences.put(Unknown Source)
            at biz.accountia.pos.install.Setup$2.actionPerformed(Setup.java:43)

    I would like to do this because I want to install an embedded JavaDB database and let multiple users on the computer use the same database with the application. How can I solve this? Can I invoke UAC and do the write as Administrator from Java? And if I write the values while logged in as Administrator, can my Java application read them when I'm logged in as a regular user?
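
    A minimal sketch of the split the question describes, assuming the write happens in a process that is already elevated (for example the installer); the Preferences API itself cannot trigger a UAC prompt, and the node name below is simply the one from the question:

        import java.util.prefs.Preferences;

        public class SharedSettings {
            // Run from an elevated (Administrator) process, e.g. during installation:
            // systemRoot() maps to HKLM\Software\JavaSoft\Prefs, which only admins may write.
            static void writeDatabasePath(String pathToDatabase) {
                Preferences node = Preferences.systemRoot().node("/myapplication");
                node.put("databasepath", pathToDatabase);
            }

            // Reading HKLM does not require elevation, so any user account can do this later.
            static String readDatabasePath() {
                Preferences node = Preferences.systemRoot().node("/myapplication");
                return node.get("databasepath", null);
            }
        }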

    Read the article

  • Outgoing UDP sniffer in python?

    - by twneale
    I want to figure out whether my computer is somehow causing a UDP flood that is originating from my network. So that's my underlying problem, and what follows is simply my non-network-person attempt to hypothesize a solution using Python. I'm extrapolating from recipe 13.1 ("Passing Messages with Socket Datagrams") from the Python cookbook (also here). Would it be possible/sensible/not insane to try somehow writing an outgoing UDP proxy in Python, so that outgoing packets could be logged before being sent on their merry way? If so, how would one go about it? Based on my quick research, perhaps I could start a server process listening on suspect UDP ports and log anything that gets sent, then forward it on, such as:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", MYPORT))
        while True:
            packet = dict(zip(('data', 'addr'), s.recvfrom(1024)))
            log.info("Received {data} from {addr}.".format(**packet))

    But what about doing this for a large number of ports simultaneously? Impractical? Are there drawbacks or other reasons not to bother with this? Is there a better way to solve this problem? (Please be gentle.)
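
    On the "large number of ports" question, one hedged sketch: a single select() loop can watch many bound UDP sockets at once without a thread per port. Note that a bound socket only sees datagrams addressed to that port on this host; it will not capture packets that other local programs send out, so this is a logging listener rather than a true sniffer. The port range below is an arbitrary placeholder:

        import select
        import socket

        SUSPECT_PORTS = range(33000, 33100)   # placeholder: whichever ports look suspicious

        sockets = []
        for port in SUSPECT_PORTS:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("", port))
            s.setblocking(False)
            sockets.append(s)

        while True:
            readable, _, _ = select.select(sockets, [], [], 1.0)  # wait up to 1s for any socket
            for s in readable:
                data, addr = s.recvfrom(65535)
                print("port %d: %d bytes from %s" % (s.getsockname()[1], len(data), addr))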

    Read the article

  • Custom Calculations in a Matrix - Reporting Services 2005

    - by bfrancis
    I am writing a report to show gas usage (in gallons) by each department. The request is to view each month and the gallons used by each department. A column is required to display each department's target goal, based on the gallons of gas it has used in a past time frame. Each department's target goal is x percent less than the total gallons used for said time frame. I currently have a matrix in Reporting Services with departments making up rows, months making up columns, and gallons filling the details. The matrix is being filled by dataset1. I have the data grouped as requested, for each month by each department. My problem is calculating the target goal. My thought was to create a second dataset (dataset2) that returns the gallons used for the requested time frame. I grouped this data by department. I was hoping I could use the department field in each dataset to make sure the appropriate numbers were used. I added a new column, which shows up next to the gallons field. As I attempted to build the expression, I found out that I could only grab the gallons used from dataset2 if I was summing the gallons field. This gives me the total gallons used by every department combined. I have tried to find resources with similar examples of what I am trying to accomplish but I cannot seem to come across one. I am trying to keep this as detailed as possible without making it too wordy. I would be more than happy to clarify or explain in further detail what I have written above if needed. If anyone has links, comments, or suggestions they would be greatly appreciated. A very simple visual of what I am hoping to accomplish is below. The months and departments would expand based on the data returned.

                                  months
        ---------------------------------------------------
        departments | gallons/month | target goal

    Read the article

  • How to exit an if clause

    - by Roman Stolper
    What sorts of methods exist for prematurely exiting an if clause? There are times when I'm writing code and want to put a break statement inside of an if clause, only to remember that those can only be used for loops. Let's take the following code as an example:

        if some_condition:
            ...
            if condition_a:
                # do something
                # and then exit the outer if block
            ...
            if condition_b:
                # do something
                # and then exit the outer if block
            # more code here

    I can think of one way to do this: assuming the exit cases happen within nested if statements, wrap the remaining code in a big else block. Example:

        if some_condition:
            ...
            if condition_a:
                # do something
                # and then exit the outer if block
            else:
                ...
                if condition_b:
                    # do something
                    # and then exit the outer if block
                else:
                    # more code here

    The problem with this is that more exit locations mean more nesting/indented code. Alternatively, I could write my code to have the if clauses be as small as possible and not require any exits. Does anyone know of a good/better way to exit an if clause? If there are any associated else-if and else clauses, I figure that exiting would skip over them.
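
    Two workarounds that come up often, shown as a hedged sketch (the helper names are placeholders, not code from the question): move the block into a function so return acts as the early exit, or wrap it in a single-pass loop so break works.

        # Option 1: a function, where `return` exits the whole block.
        def handle(some_condition, condition_a, condition_b):
            if not some_condition:
                return
            if condition_a:
                do_something_a()   # placeholder
                return             # leaves the outer "if" immediately
            if condition_b:
                do_something_b()   # placeholder
                return
            # more code here

        # Option 2: a loop that runs exactly once, so `break` becomes the exit.
        if some_condition:
            while True:
                if condition_a:
                    do_something_a()
                    break
                if condition_b:
                    do_something_b()
                    break
                # more code here
                break              # always leave after one pass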

    Read the article

  • Is there a good way to QuickCheck Happstack.State methods?

    - by Paul Kuliniewicz
    I have a set of Happstack.State MACID methods that I want to test using QuickCheck, but I'm having trouble figuring out the most elegant way to accomplish that. The problems I'm running into are:

      - The only way to evaluate an Ev monad computation is in the IO monad via query or update.
      - There's no way to create a purely in-memory MACID store; this is by design. Therefore, running things in the IO monad means there are temporary files to clean up after each test.
      - There's no way to initialize a new MACID store except with the initialValue for the state; it can't be generated via Arbitrary unless I expose an access method that replaces the state wholesale.
      - Working around all of the above means writing methods that only use features of MonadReader or MonadState (and running the test inside Reader or State instead of Ev). This means forgoing the use of getRandom or getEventClockTime and the like inside the method definitions.

    The only options I can see are:

      - Run the methods in a throw-away on-disk MACID store, cleaning up after each test and settling for starting from initialValue each time.
      - Write the methods to have most of the code run in a MonadReader or MonadState (which is more easily testable), and rely on a small amount of non-QuickCheck-able glue around it that calls getRandom or getEventClockTime as necessary.

    Is there a better solution that I'm overlooking?

    Read the article

  • Executing sequential stored procedures; works in query analyzer, doesn't in my .NET application

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit'; it is a passthrough stored procedure to another database on the same machine, MyOtherDB, which also has a stored procedure called 'CreateAudit'. In other words, in MyDb I have CreateAudit, which simply does EXEC MyOtherDB.dbo.CreateAudit. I call the MyDb CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode):

        Result = CreateAudit(recordId, "Opened")

    One line after that, I call:

        Result2 = CreateAudit(recordId, "Closed")

    The second call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so the record is successfully marked closed; the other 19 times nothing happens, but I do not get an error! Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning; any ideas why this is happening? Thanks for any responses, very interested in hearing how this can happen.

    Read the article

  • Is the Windows dev environment worth the cost?

    - by MCS
    I recently made the move from Linux development to Windows development. And as much of a Linux enthusiast as I am, I have to say: C# is a beautiful language, Visual Studio is terrific, and now that I've bought myself a trackball my wrist has stopped hurting from using the mouse so much. But there's one thing I can't get past: the cost. Windows 7, Visual Studio, SQL Server, Expression Blend, ViEmu, Telerik, MSDN; we're talking thousands for each developer on the project! You're definitely getting something for your money. My question is: is it worth it? [Not every developer needs all the aforementioned tools, but have you ever heard of anyone writing C# code without Visual Studio? I've worked on pretty large software projects in Linux without having to pay for any development tool whatsoever.] Now obviously, if you're already a Windows shop, it doesn't pay to retrain all your developers. And if you're looking to develop a Windows desktop app, you just can't do that in Linux. But if you were starting a new web application project and could hire developers who are experts in whatever languages you want, would you still choose Windows as your development platform despite the high cost? And if yes, why?

    Read the article

  • Casting to a struct from LPVOID - C

    - by Jamie Keeling
    Hello, I am writing a simple console application which will allow me to create a number of threads from a set of parameters passed through the arguments I provide.

        DWORD WINAPI ThreadFunc(LPVOID threadData)
        {
        }

    I am packing the parameters into a struct and passing it into the CreateThread method, then trying to unpack it by casting the LPVOID back to the type of my struct. I'm not sure how to cast it after getting it through so I can use it in the method itself; I've tried various combinations (example attached) but it won't compile.

    Struct:

        #define numThreads 1

        struct Data
        {
            int threads;
            int delay;
            int messages;
        };

    Call to method:

        HANDLE hThread;
        DWORD threadId;

        struct Data *tData;
        tData->threads = numThreads;
        tData->messages = 3;
        tData->delay = 1000;

        // Create child thread
        hThread = CreateThread(
            NULL,        // lpThreadAttributes (default)
            0,           // dwStackSize (default)
            ThreadFunc,  // lpStartAddress
            &tData,      // lpParameter
            0,           // dwCreationFlags
            &threadId    // lpThreadId (returned by function)
        );

    My attempt:

        DWORD WINAPI ThreadFunc(LPVOID threadData)
        {
            struct Data tData = (struct Data)threadData;
            int msg;
            for (msg = 0; msg < 5; msg++)
            {
                printf("Message %d from child\n", msg);
            }
            return 0;
        }

    Compiler error:

        error C2440: 'type cast' : cannot convert from 'LPVOID' to 'Data'

    As you can see, I have already implemented a way to loop through a number of messages; I'm trying to make things slightly more advanced and add some further functionality.
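
    For reference, a hedged sketch of a cast the compiler will accept: the thread receives a pointer, so it must be cast to struct Data * (and the struct itself needs real storage that stays alive for the thread's lifetime; the values below are just the ones from the question):

        #include <stdio.h>
        #include <windows.h>

        struct Data
        {
            int threads;
            int delay;
            int messages;
        };

        DWORD WINAPI ThreadFunc(LPVOID threadData)
        {
            /* Cast to a pointer, then access members through -> */
            struct Data *tData = (struct Data *)threadData;
            int msg;
            for (msg = 0; msg < tData->messages; msg++)
            {
                printf("Message %d from child\n", msg);
            }
            return 0;
        }

        int main(void)
        {
            struct Data tData = { 1, 1000, 3 };   /* threads, delay, messages: real storage, not an uninitialized pointer */
            DWORD threadId;
            HANDLE hThread = CreateThread(NULL, 0, ThreadFunc, &tData, 0, &threadId);
            WaitForSingleObject(hThread, INFINITE);  /* keep tData alive until the thread finishes */
            CloseHandle(hThread);
            return 0;
        }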

    Read the article

  • How do I browse a Websphere MQ message without removing it?

    - by jmgant
    I'm writing a .NET Windows Forms application that will post a message to a Websphere MQ queue and then poll a different queue for a response. If a response is returned, the application will partially process the response in real time. But the response needs to stay in the queue so that a daily batch job, which also reads from the response queue, can do the rest of the processing. I've gotten as far as reading the message. What I haven't been able to figure out is how to read it without removing it. Here's what I've got so far. I'm an MQ newbie, so any suggestions will be appreciated. And feel free to respond in C#.

        Public Function GetMessage(ByVal msgID As String) As MQMessage
            Dim q = ConnectToResponseQueue()
            Dim msg As New MQMessage()
            Dim getOpts As New MQGetMessageOptions()
            Dim runThru = Now.AddMilliseconds(CInt(ConfigurationManager.AppSettings("responseTimeoutMS")))

            System.Threading.Thread.Sleep(1000) ' Wait for one second before checking for the first response

            While True
                Try
                    q.Get(msg, getOpts)
                    Return msg
                Catch ex As MQException When ex.Reason = MQC.MQRC_NO_MSG_AVAILABLE
                    If Now > runThru Then Throw ex
                    System.Threading.Thread.Sleep(3000)
                Finally
                    q.Close()
                End Try
            End While

            Return Nothing ' Should never reach here
        End Function

    NOTE: I haven't verified that my code actually removes the message. But that's how I understand MQ to work, and that appears to be what's happening. Please correct me if that's not the default behavior.
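
    Since the question invites C#, here is a hedged sketch of the browse-instead-of-destructive-get idea using the IBM MQ .NET classes; the queue manager object, queue name, and surrounding error handling are assumed, and the option names should be checked against the installed client version:

        using IBM.WMQ;   // amqmdnet.dll

        // Open the queue for browsing rather than destructive gets.
        MQQueue queue = queueManager.AccessQueue(
            "RESPONSE.QUEUE",                               // assumed queue name
            MQC.MQOO_BROWSE | MQC.MQOO_FAIL_IF_QUIESCING);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.Options = MQC.MQGMO_BROWSE_FIRST;               // use MQC.MQGMO_BROWSE_NEXT to walk further

        MQMessage msg = new MQMessage();
        queue.Get(msg, gmo);                                // message stays on the queue for the batch job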

    Read the article

  • Issue in creating Zip file using glob.glob

    - by infosyssec
    Hi, I am creating a Zip file from a folder (and subfolders). It works fine and creates a new .zip file, but I am having an issue while using glob.glob. It reads all files from the desired (source) folder and writes them to the new zip file, but while it does add the subdirectories themselves, it does not add the files from those subdirectories. I give the user an option to select the filename and path, as well as the file type (zip or tar). I don't get any problem while creating a .tar.gz file, but when the user creates a .zip file, this problem comes up. Here is my code:

        for name in (Source_Dir):
            for name in glob.glob("/path/to/source/dir/*"):
                myZip.write(name, os.path.basename(name), zipfile.ZIP_DEFLATED)
        myZip.close()

    Also, if I use the code below:

        for dirpath, dirnames, filenames in os.walk(Source_Dir):
            for filename in filenames:
                myZip.write(os.path.join(dirpath, filename), os.path.basename(filename))
        myZip.close()

    the second version takes all files, even those inside folders/subfolders, creates a new .zip file and writes them to it without any directory structure. It does not even keep the directory structure for the main folder; it simply writes all files from the main directory and subdirectories flat into that .zip file. Can anyone please help me or offer a suggestion? I would prefer glob.glob rather than the second option. Thanks in advance. Regards, Akash
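
    If the goal is to keep the folder layout inside the archive, a hedged sketch of the os.walk approach that stores each file under its path relative to the source folder (the function and variable names here are illustrative, not from the question):

        import os
        import zipfile

        def zip_tree(source_dir, zip_path):
            # Walk the tree and store each file under its path relative to source_dir,
            # so subfolder names are preserved inside the archive.
            myZip = zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED)
            try:
                for dirpath, dirnames, filenames in os.walk(source_dir):
                    for filename in filenames:
                        full_path = os.path.join(dirpath, filename)
                        arcname = os.path.relpath(full_path, source_dir)
                        myZip.write(full_path, arcname)
            finally:
                myZip.close()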

    Read the article

  • Python access an object byref / Need tagging

    - by Aaron C. de Bruyn
    I need to suck data from stdin and create an object. The incoming data is between 5 and 10 lines long. Each line has a process number and either an IP address or a hash. For example:

        pid=123 ip=192.168.0.1 - some data
        pid=123 hash=ABCDEF0123 - more data
        hash=ABCDEF123 - More data
        ip=192.168.0.1 - even more data

    I need to put this data into a class like:

        class MyData():
            pid = None
            hash = None
            ip = None
            lines = []

    I need to be able to look up the object by IP, HASH, or PID. The tough part is that there are multiple streams of data intermixed coming from stdin. (There could be hundreds or thousands of processes writing data at the same time.) I have regular expressions pulling out the PID, IP, and HASH that I need, but how can I access the object by any of those values? My thought was to do something like this:

        myarray = {}
        for line in sys.stdin.readlines():
            if pid and ip:  # If we can get a PID out of the line
                myarray[pid] = MyData().pid = pid  # Create a new MyData object, assign the PID, and stick it in myarray accessible by PID.
                myarray[pid].ip = ip               # Add the IP address to the new object
                myarray[pid].lines.append(data)    # Append the data
                myarray[ip] = myarray[pid]         # Take the object by PID and create a key from the IP.
        <snip>do something similar for pid and hash, hash and ip, etc...</snip>

    This gives me an array with two keys (a PID and an IP) and they both point to the same object. But on the next iteration of the loop, if I find (for example) an IP and HASH and do:

        myarray[hash] = myarray[ip]

    the following is False:

        myarray[hash] == myarray[ip]

    Hopefully that was clear. I hate to admit that waaay back in the VB days, I remember being able to handle objects byref instead of byval. Is there something similar in Python? Or am I just approaching this wrong?
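
    A hedged sketch of the multi-key pattern: dict values are already references, so several keys can point at the same instance. Note that the chained assignment myarray[pid] = MyData().pid = pid stores the pid value itself, not the new object, which may be part of the problem. The variables pid, ip, hash_ and data are assumed to come from the existing regexes:

        class MyData(object):
            def __init__(self):
                self.pid = None
                self.hash = None
                self.ip = None
                self.lines = []   # per-instance list, not shared across objects

        index = {}   # one dict, keyed by pid, ip, and hash all at once

        def find_or_create(*keys):
            # Reuse an existing object if any key is already known, otherwise make one.
            for key in keys:
                if key is not None and key in index:
                    return index[key]
            return MyData()

        # Inside the stdin loop, after the regexes have run (hash_ avoids shadowing the built-in hash()):
        obj = find_or_create(pid, ip, hash_)
        if pid:
            obj.pid = pid
        if ip:
            obj.ip = ip
        if hash_:
            obj.hash = hash_
        obj.lines.append(data)
        for key in (pid, ip, hash_):
            if key is not None:
                index[key] = obj   # every known key now refers to the same instance

        # Both lookups return the very same object:
        # index[pid] is index[ip]  ->  True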

    Read the article

  • Can I have two separate projects, 1 WebForms and 1 ASP.NET MVC, to both point to the same domain?

    - by Hamman359
    Is it possible to set up two separate projects, one WebForms and one ASP.NET MVC, to both point to the same domain? i.e. both point to different pages within www.somesite.com. Here's some background on the application and why I'm asking. This is a brownfield application that is currently 2.0 WebForms and is full of WebFormy 'goodness' (i.e. ObjectDataSources, FormView controls, UpdatePanels, etc...). There are lots of other 'fun' things in the code base, like 600+ stored procedures and 200+ line methods in the business layer code that get data from the DB via a stored proc, do some processing on the data, build an HTML string using string concatenation, and then return that string to the UI layer. What we are planning on doing is developing new features in MVC and slowly converting the existing features over to MVC one at a time. As part of this transition, we will also be re-writing the layers below the UI to clean up the mess there and to do things like replace the stored procedures with NHibernate and introduce an IoC container. I know that you can run WebForms and MVC side-by-side in the same project; however, because we will be making wholesale changes to the way we do many things throughout our entire development stack, I'd like the new stuff to be a completely separate project within the solution. This should serve as a very visual reminder that this is a different way of doing things than before, and make it easier to remove the old code as it is no longer needed. What I don't know is: is this even possible? Can two separate projects point to the same domain? Here's a quick example of what I'm thinking:

        www.somesite.com/orders.aspx?id=123   (Orders page from existing WebForms project)
        www.somesite.com/customer/987         (Customer page from new MVC project)

    Read the article

  • ConcurrentLinkedQueue$Node remains in heap after remove()

    - by action8
    I have a multithreaded app writing and reading a ConcurrentLinkedQueue, which is conceptually used to back entries in a list/table. I originally used a ConcurrentHashMap for this, which worked well. A new requirement required tracking the order entries came in, so they could be removed in oldest-first order, depending on some conditions. ConcurrentLinkedQueue appeared to be a good choice, and functionally it works well. A configurable number of entries are held in memory, and when a new entry is offered once the limit is reached, the queue is searched in oldest-first order for one that can be removed. Certain entries are not to be removed by the system and wait for client interaction. What appears to be happening is that I have an entry at the front of the queue that occurred, say, 100K entries ago. The queue appears to have the limited number of configured entries (size() == 100), but when profiling, I found that there were ~100K ConcurrentLinkedQueue$Node objects in memory. This appears to be by design; just glancing at the source for ConcurrentLinkedQueue, a remove merely removes the reference to the object being stored but leaves the linked list in place for iteration. Finally, my question: is there a "better" lazy way to handle a collection of this nature? I love the speed of the ConcurrentLinkedQueue, I just can't afford the unbounded leak that appears to be possible in this case. If not, it seems like I'd have to create a second structure to track order, which may have the same issues, plus a synchronization concern.

    Read the article

  • Vim, LaTeX, word-wrapping, and version control

    - by Bkkbrad
    I'm writing a LaTeX document in Vim, and I have it hard-wrapping at 80 characters to make reading easier. However, this causes problems with tracking changes in version control. For example, inserting "Lorem ipsum" at the beginning of this text:

        1 Dolor sit amet, consectetur adipiscing elit. Phasellus bibendum lobortis lectus
        2 quis porta. Aenean vestibulum magna vel purus laoreet at molestie massa
        3 suscipit. Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien
        4 ullamcorper elit, dignissim consectetur justo tellus et nunc.

    results in:

        1 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus bibendum
        2 lobortis lectus quis porta. Aenean vestibulum magna vel purus laoreet at
        3 molestie massa suscipit. Vestibulum vestibulum, mauris nec convallis ultrices,
        4 tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    When I review this change in git, it tells me that all the lines of the paragraph have changed because of the wrapping, even though only one semantic change has occurred. One way around this problem is to have every sentence on its own line. This looks the same in the rendered document, but the source is now harder to read, because the line lengths vary widely:

        1 Lorem ipsum dolor sit amet, consectetur adipiscing elit.
        2 Phasellus bibendum lobortis lectus quis porta.
        3 Aenean vestibulum magna vel purus laoreet at molestie massa suscipit.
        4 Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    (If I soft-wrap at 80, things still look bad, just in a different way.) Is it possible to have my text on disk with one newline per sentence, but display and edit it in Vim as if the text of each paragraph were one long line, soft-wrapped at 80 characters? I assume it requires some vim-foo rather than tweaking git or LaTeX.
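
    A hedged .vimrc fragment for the usual compromise: keep one sentence per line on disk and let Vim soft-wrap it for display. These are standard Vim options, but the exact combination is a matter of taste, and Vim soft-wraps at the window edge rather than at an arbitrary column such as 80:

        " For LaTeX buffers: never insert hard newlines automatically,
        " but visually wrap long lines at word boundaries.
        autocmd FileType tex setlocal textwidth=0 wrapmargin=0
        autocmd FileType tex setlocal formatoptions-=t
        autocmd FileType tex setlocal wrap linebreak nolist

        " Move by screen line instead of file line while editing wrapped text.
        autocmd FileType tex noremap <buffer> j gj
        autocmd FileType tex noremap <buffer> k gk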

    Read the article

  • dotNet Templated, Repeating, Databound ServerControl: Modifying underlying ServerControl data per te

    - by Campbeln
    I have a server control that wraps an underlying class which manages a number of indexes to track where it is in a dataset (i.e. RenderedRecordCount, ErroredRecordCount, NewRecordCount, etc.). I've got the server control rendering great, but I'm having an issue with OnDataBinding, as it seems to happen after CreateChildControls and before Render (both of which properly manage the iteration of the underlying indexes). While I'm somewhat familiar with the ASP.NET page lifecycle, this one seems to be beyond me at the moment. So... how do I hook into the iterative process OnDataBinding uses so I can manage the underlying indexes? Will I have to iterate over the ITemplates myself, managing the indexes as I go, or is there an easier solution? [edit: Agh... writing the problem out is very cathartic... I'm thinking this is exactly what I will need to do...] Also, I implemented the iteration of the underlying indexes during CreateChildControls originally in the belief that that was the proper place to hook in for events like OnDataBinding (thinking it was done as the controls were being .Add'd). Now it seems that this may actually be unnecessary. So I guess the secondary question is: what happens during CreateChildControls? Are the unadulterated (read: with various <%-tags in place) controls added to the .Controls collection without any other processing?

    Read the article

  • multithreading issue

    - by vbNewbie
    I have written a multithreaded crawler and the process is simply creating threads and having them access a list of URLs to crawl. They then access the URLs and parse the HTML content. All this seems to work fine. Now when I need to write to tables in a database is when I experience issues. I have two declared ArrayLists that will contain the content each thread parses. The first ArrayList simply holds the RSS feed links and the other ArrayList contains the different posts. I then use a For Each loop to iterate over one while sequentially incrementing an index into the other and writing to the database. My problem is that each time a new thread accesses one of the lists the content is changed, and this affects the iteration. I tried using nested loops but it did not work before, and this works fine using a single thread. I hope this makes sense. Here is my code:

        SyncLock dlock
            For Each l As String In links
                finallinks.Add(l)
            Next
        End SyncLock

        SyncLock dlock
            For Each p As String In posts
                finalposts.Add(p)
            Next
        End SyncLock

        ...

        Dim i As Integer = 0
        SyncLock dlock
            For Each rsslink As String In finallinks
                postlink = finalposts.Item(i)
                i = i + 1

    finallinks and finalposts are the two ArrayLists. I did not include the rest of the code, which shows the threads working, but this is the essential part where my error occurs, which is basically here:

        postlink = finalposts.Item(i)
        i = i + 1

    ERROR: index was out of range. Must be non-negative and less than the size of the collection.

    Is there an alternative?
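
    One hedged sketch of a fix in the same VB.NET style: store each link and its post together as a single pair added under one lock, and have the database writer work from a snapshot, so the two collections can never drift apart while other threads keep adding. The pairing assumes links and posts line up one-to-one, and the names below are illustrative:

        ' Shared state: one list of (link, post) pairs instead of two parallel lists.
        Dim finalitems As New List(Of KeyValuePair(Of String, String))

        ' Producer threads: add both halves of a pair together, under the same lock.
        SyncLock dlock
            For i As Integer = 0 To links.Count - 1
                finalitems.Add(New KeyValuePair(Of String, String)(links(i), posts(i)))
            Next
        End SyncLock

        ' Database writer: copy a snapshot under the lock, then iterate the snapshot
        ' without holding the lock while other threads continue adding.
        Dim snapshot As List(Of KeyValuePair(Of String, String))
        SyncLock dlock
            snapshot = New List(Of KeyValuePair(Of String, String))(finalitems)
            finalitems.Clear()
        End SyncLock

        For Each item As KeyValuePair(Of String, String) In snapshot
            ' item.Key is the RSS link, item.Value is the post; write them to the database here.
        Next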

    Read the article

  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi All, I am writing an application using Boost threads and using Boost barriers to synchronize the threads. I have two machines to test the application. Machine 1 is a Core 2 Duo (T8300) machine (Windows XP Professional, 4GB RAM) where I am getting the following performance figures:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 35 (66% improvement)

    Further increases in the number of threads decrease the TPS, but that is understandable as the machine has only two cores. Machine 2 is a two-socket quad-core (Xeon X5355) machine (Windows 2003 Server with 4GB RAM) and has 8 effective cores:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 27 (28% improvement)
        Number of threads: 4, TPS: 25
        Number of threads: 8, TPS: 24

    As you can see, performance degrades after 2 threads (even though the machine has 8 cores). If the program had some bottleneck, then it should have degraded for 2 threads as well. Any ideas or explanations? Does the OS have some role in performance? It seems like the Core 2 Duo (2.4GHz) scales better than the Xeon X5355 (2.66GHz), even though the Xeon has the better clock speed. Thank you -Zoolii

    Read the article

  • How to distinguish between two different UDP clients on the same IP address?

    - by Ricket
    I'm writing a UDP server, which is a first for me; I've only done a bit of TCP communications. And I'm having trouble figuring out exactly how to distinguish which user is which, since UDP deals only with packets rather than connections and I therefore cannot tell exactly who I'm communicating with. Here is pseudocode of my current server loop:

        DatagramPacket p;
        socket.receive(p);
        // now p contains the user's IP and port, and the data
        int key = getKey(p);
        if (key == 0) { // connection request
            key = makeKey(p);
            clients.add(key, p.ip);
            send(p.ip, p.port, key); // give the user his key
        } else { // user has a key
            // verify key belongs to that IP address
            // lookup the user's session data based on the key
            // react to the packet in the context of the session
        }

    When designing this, I kept in mind these points:

      - Multiple users may exist on the same IP address, due to the presence of routers, therefore users must have a separate identification key.
      - Packets can be spoofed, so the key should be checked against its original IP address and ignored if a different IP tries to use the key.
      - The outbound port on the client side might change among packets.

    Is that third assumption correct, or can I simply assume that one user = one IP+port combination? Is this commonly done, or should I continue to create a special key like I am currently doing? I'm not completely clear on how TCP negotiates a connection, so if you think I should model it off of TCP then please link me to a good tutorial or something on TCP's SYN/SYNACK/ACK mess. Also note, I do have a provision to resend a key if an IP sends a 0 and that IP already has a pending key; I omitted it to keep the snippet simple. I understand that UDP is not guaranteed to arrive, and I plan to add reliability to the main packet handling code later as well.
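
    On the third point: NAT can assign a different source port to each client behind the same IP, but for the lifetime of a single client session the (IP, port) pair it presents is normally stable, which is why many UDP servers simply key their session table on it. A hedged Java sketch of that approach (Session is a hypothetical per-client state class and the port number is an arbitrary example):

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.SocketAddress;
        import java.util.HashMap;
        import java.util.Map;

        public class UdpSessionServer {
            static class Session { }   // hypothetical per-client state

            public static void main(String[] args) throws Exception {
                DatagramSocket socket = new DatagramSocket(4000);
                Map<SocketAddress, Session> sessions = new HashMap<SocketAddress, Session>();

                byte[] buf = new byte[1500];
                while (true) {
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    socket.receive(p);

                    SocketAddress from = p.getSocketAddress();  // sender's IP + port
                    Session session = sessions.get(from);
                    if (session == null) {                      // first packet from this endpoint
                        session = new Session();
                        sessions.put(from, session);
                    }
                    // react to the packet in the context of `session`
                }
            }
        }

    An explicit key, as in the pseudocode above, remains the safer choice if clients can change ports mid-session or if spoofing is a concern.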

    Read the article

  • Building a custom (dynamic) dataset and grid

    - by marko.ivanovski.nz
    Hi, I'm in the process of building a dynamic table in which you can add/remove rows and columns, so it varies in size depending on what the user wants. Its purpose is to store properties for a product, but there can be from 1 to 10 different properties (columns) per product, and multiple instances (rows) of the product as well. Here's a screenshot of what I mean: http://i40.tinypic.com/nbqkxc.jpg As you can see, I need the structure to be completely up to the client, which is where I'm getting stuck. I've started writing a custom DataSet that has "add column" and "add row" buttons, and have built a custom Table with TextBoxes in each cell which builds from that DataSet. I have no idea how to store the data on submit, though, and to make it even more complex I need to store this in the database as a string, which I think I can do by converting it to XML. Any help is appreciated; I think I just need a pointer in the right direction and am happy to do research from there. Thanks in advance. Marko

    Read the article

  • What is the best anti-crack scheme for your trial or subscription software?

    - by gmatt
    Writing code takes time and effort, and just like any other human being we need to live by making an income (save for the few who are actually self-sustainable). Here are three general schemes for making a living: independent developers can offer a trial-then-purchase scheme; an alternative is an open source base application with paid extensions; a last (probably least popular with customers) scheme is to enforce some kind of subscription, so the price of the software pales in comparison to the long-term subscription fees. So, my question would be a hypothetical one. Suppose that you invest thousands of hours into developing an application. Now suppose you can choose any one of the three options to make a living off this application, or any other option you want, and suppose you have a very real fear of losing 80% of your revenue to a cracked version if one can be made. To be clear, this application does not require the internet to perform all its useful functions; that is, your application is a prime candidate to be a cracked release on some website. Which option would you feel most comfortable with for defending yourself against this possible situation, and briefly describe why this option would be the best.

    Read the article

  • Rails: fighting long HTTP response times with Ajax. Is it a good idea? Please help with implementation

    - by baranov
    Hi, everybody! I've googled some tutorials, browsed some SO answers, and was unable to find a recipe for my problem. I'm writing a web site which is supposed to display an almost-realtime stock chart. Data is stored in a constantly updating MySQL database, and I wrote a find_by_sql query which fetches all the data I need to get my chart drawn. Everything is OK except performance: it takes from one second to one minute for different queries to fetch all the data from the database, and this time includes the necessary (My)SQL-server-side calculations. This is simply unacceptable. I got the following idea: if the data is queried from the MySQL server one point at a time instead of as the entire dataset, it takes only about 1-100ms to get an individual point. I imagine the data fetch process might be browser-driven. After the user presses the button to get a chart drawn, the controller makes one request to the database and renders, say, a progress bar at 1% ready. When the browser gets the response, it immediately makes an (Ajax) request, and the server fetches the next piece of data and renders "2%". And so on, until all the data is ready and the server displays the requested chart. Could this be implemented in Rails + JS, and is there a tutorial for solving a similar problem on the web? I suppose if the thing is feasible at all, somebody should have already done this before. I have read several articles about Ajax and I believe I understand the general principles, but I've never done nontrivial Ajax programming myself. Thanks for your time!
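
    The client side of that scheme is a simple polling loop; here is a hedged sketch assuming jQuery and a made-up /charts/:id/progress action that returns JSON with a percent field and, when finished, the chart markup (every name here is an assumption for illustration):

        // Ask the server for progress, update the bar, and keep polling until done.
        function pollChart(chartId) {
          $.getJSON('/charts/' + chartId + '/progress', function (resp) {
            $('#progress').css('width', resp.percent + '%');
            if (resp.percent < 100) {
              setTimeout(function () { pollChart(chartId); }, 500);  // poll again shortly
            } else {
              $('#chart').html(resp.chart_html);  // server-rendered chart
            }
          });
        }

    Each request then prompts the server to fetch the next slice of points and cache the partial result, so no single request blocks for the full minute.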

    Read the article

  • Factory.next not working in FactoryGirl 4.x and Rails 3.0. Anyone know the replacement?

    - by cchapman900
    I'm very new to Rails and am following along in the Ruby on Rails 3 Tutorial book by Michael Hartl, and I'm running into a little bump while using the factory_girl gem. Specifically, I'm not sure how to update the code Factory.next(...). Before coming to this, I did run into a little problem between the older version of FactoryGirl used in the book and the current 4.1 version I'm using now, but was able to resolve it. Specifically, the old way of writing

        user = Factory(:user)

    needed to be updated to

        user = FactoryGirl.create(:user)

    That was fine, but now I'm coming to this code (as written in the book) in spec/controllers/users_controler_spec.rb:

        @users << Factory(:user, :email => Factory.next(:email))

    which I've tried updating to

        @users << FactoryGirl.create(:user, :email => FactoryGirl.next(:email))

    but I get the error:

        Failure/Error: @users << FactoryGirl.create(:user, :email => FactoryGirl.next(:email))
        NoMethodError: undefined method `next' for FactoryGirl:Module

    I've tried a few different variations but still can't quite get it. Is the problem I'm having with FactoryGirl and just not using the gem correctly, or does it have something to do with the Ruby methods?
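
    For comparison, a hedged sketch of how sequences are usually written against factory_girl 4.x, where FactoryGirl.generate replaces the old Factory.next (the sequence definition below is illustrative and assumes a factories file such as spec/factories.rb):

        # spec/factories.rb
        FactoryGirl.define do
          sequence :email do |n|
            "person#{n}@example.com"
          end

          factory :user do
            name  "Example User"
            email { FactoryGirl.generate(:email) }
          end
        end

        # in the spec
        @users << FactoryGirl.create(:user, :email => FactoryGirl.generate(:email))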

    Read the article

  • JavaScript: Achieving precise animation end values?

    - by bobthabuilda
    I'm currently trying to write my own JavaScript library. I'm in the middle of writing an animation callback, but I'm having trouble getting precise end values, especially when animation durations are smaller. Right now, I'm only targeting positional animation (left, top, right, bottom). When my animations complete, they end up having an error margin of ~5px on faster animations, and ~0.5px on animations of 1000ms or greater. Here's the bulk of the callback, with notes following.

        var current = parseFloat( this[0].style[prop] || 0 )
            // If our target value is greater than the current
          , gt = !!( value > current )
          , delta = ( Math.abs(current - value) / (duration / 13) ) * (gt ? 1 : -1)
          , elem = this[0]
          , anim = setInterval( function(){
                elem.style[prop] = ( current + delta ) + 'px';
                current = parseFloat( elem.style[prop] );
                if ( gt && current >= value || !gt && current <= value )
                    clearInterval( anim );
            }, 13 );

      - this[0] and elem both reference the target DOM element.
      - prop references the property to animate: left, top, bottom, right, etc.
      - current is the current value of the DOM element's property.
      - value is the desired value to animate to.
      - duration is the specified duration (in ms) that the animation should last.
      - 13 is the setInterval delay (which should roughly be the absolute minimum for all browsers).
      - gt is a var that is true if value exceeds the initial current, else it is false.

    How can I resolve the error margin?
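
    A hedged sketch of the usual fix: compute the position from elapsed time on every tick rather than accumulating a fixed delta, and clamp the progress so the final frame lands exactly on the target value (the function wrapper and names are illustrative):

        function animateProp(elem, prop, value, duration) {
          var start = parseFloat(elem.style[prop]) || 0,
              change = value - start,
              begin = new Date().getTime(),
              anim = setInterval(function () {
                var elapsed = new Date().getTime() - begin,
                    progress = Math.min(elapsed / duration, 1);  // clamp to [0, 1]
                elem.style[prop] = (start + change * progress) + 'px';
                if (progress === 1) clearInterval(anim);         // ends exactly at `value`
              }, 13);
        }

    Because the last assignment writes start + change * 1, the element always finishes at precisely the requested value regardless of timer jitter or duration.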

    Read the article

  • Parse and read data frame in C?

    - by user253656
    I am writing a program that reads data from the serial port on Linux. The data are sent by another device with the following frame format:

        | Start | Command | Data           | CRC | End  |
        | 0x02  | 0x41    | (0-127 octets) |     | 0x03 |

    The Data field contains up to 127 octets, as shown; octets 1-2 contain one type of data and octets 3-4 contain another. I need to get these data. I know how to write and read data to and from a serial port in Linux, but only as a simple string (like "ABD"). My issue is that I do not know how to parse a data frame formatted as above so that I can:

      - get the data in octets 1-2 of the Data field
      - get the data in octets 3-4 of the Data field
      - get the value in the CRC field to check the consistency of the data

    Here is sample code that reads and writes a simple string from and to a serial port in Linux:

        int writeport(int fd, char *chars)
        {
            int len = strlen(chars);
            chars[len] = 0x0d;    // stick a <CR> after the command
            chars[len+1] = 0x00;  // terminate the string properly
            int n = write(fd, chars, strlen(chars));
            if (n < 0) {
                fputs("write failed!\n", stderr);
                return 0;
            }
            return 1;
        }

        int readport(int fd, char *result)
        {
            int iIn = read(fd, result, 254);
            result[iIn-1] = 0x00;
            if (iIn < 0) {
                if (errno == EAGAIN) {
                    printf("SERIAL EAGAIN ERROR\n");
                    return 0;
                } else {
                    printf("SERIAL read error %d %s\n", errno, strerror(errno));
                    return 0;
                }
            }
            return 1;
        }

    Does anyone have some ideas? Thanks, all.
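
    A hedged sketch of extracting the fields once a complete frame sits in a buffer. The 2-octet, big-endian layout assumed for the data fields and the CRC width are guesses to be checked against the device's documentation, and the CRC comparison itself is left as a placeholder:

        #include <stdint.h>
        #include <stddef.h>

        #define FRAME_START 0x02
        #define FRAME_END   0x03

        /* Assumed layout: [start][command][data...][crc hi][crc lo][end] */
        int parse_frame(const uint8_t *buf, size_t len,
                        uint16_t *field1, uint16_t *field2, uint16_t *crc)
        {
            if (len < 9 || buf[0] != FRAME_START || buf[len - 1] != FRAME_END)
                return -1;                        /* not a complete frame */

            const uint8_t *data = buf + 2;        /* first Data octet */

            *field1 = (uint16_t)((data[0] << 8) | data[1]);         /* Data octets 1-2 */
            *field2 = (uint16_t)((data[2] << 8) | data[3]);         /* Data octets 3-4 */
            *crc    = (uint16_t)((buf[len - 3] << 8) | buf[len - 2]);

            /* Recompute the CRC over Command + Data here and compare it to *crc. */
            return 0;
        }

    In practice the bytes returned by read() also need to be accumulated until a 0x02 ... 0x03 pair delimits a whole frame, since a single read() may return only part of one.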

    Read the article

  • Turbogears 2.0 with Python 2.6

    - by AKSK
    I've tried to install TurboGears 2.0 with Python 2.6 on both Windows 7 and Windows XP, but both give the same error:

        File "D:\PythonProjects\tg2env\Scripts\paster-script.py", line 8, in <module>
          load_entry_point('pastescript==1.7.3', 'console_scripts', 'paster')()
        File "D:\PythonProjects\tg2env\lib\site-packages\pastescript-1.7.3-py2.6.egg\paste\script\command.py", line 73, in run
          commands = get_commands()
        File "D:\PythonProjects\tg2env\lib\site-packages\pastescript-1.7.3-py2.6.egg\paste\script\command.py", line 115, in get_
          plugins = pluginlib.resolve_plugins(plugins)
        File "D:\PythonProjects\tg2env\lib\site-packages\pastescript-1.7.3-py2.6.egg\paste\script\pluginlib.py", line 81, in res
          pkg_resources.require(plugin)
        File "D:\PythonProjects\tg2env\lib\site-packages\setuptools-0.6c9-py2.6.egg\pkg_resources.py", line 626, in require
        File "D:\PythonProjects\tg2env\lib\site-packages\setuptools-0.6c9-py2.6.egg\pkg_resources.py", line 524, in resolve
        pkg_resources.DistributionNotFound: zope.sqlalchemy>=0.4: Not Found for: City_Guide (did you run python setup.py develop?)

    Now, according to the documentation on the main site, TurboGears 2.0 supports Python 2.6; this page says:

        TurboGears works with any version of python between 2.4 and 2.6. The most widely deployed version of python at the moment of this writing is version 2.5. Both python 2.4 and python 2.6 require additional steps which will be covered in the appropriate sections.

    But they never mention those steps in the documentation.

    Read the article
