Search Results

Search found 12007 results on 481 pages for 'usb speed'.


  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. So, my project is 'modernizing' a desktop application by converting it to the web, with an ICEFaces UI and the server side written in Java. However, they are keeping around the same Oracle database, which at current count has about 700-900 tables and probably a billion total records in the tables. Some individual tables have 250 million rows, many have over 25 million. Needless to say, the database is not scaling well. As a result, the performance of the application looks likely to be abysmal. The architects / decision-makers-that-be have all either refused or are unwilling to restructure the persistence layer. So, basically we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their job. So, my question is, what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put in between the database and the Java code to speed up performance while at the same time keeping the database structure intact? Caching is obviously an option, but I don't see that as being a cure-all. Is it possible to layer a NoSQL DB in between or something?

    Read the article

  • How to implement a counter when using golang's goroutine?

    - by MrROY
    I'm trying to make a queue struct that has push and pop functions. I need 10 goroutines pushing and another 10 goroutines popping data, as in the code below. Questions: 1. I need to print out how much I have pushed/popped, but I don't know how to do that. 2. Is there any way to speed up my code? It is too slow for me.

        package main

        import (
            "runtime"
            "time"
        )

        const (
            DATA_SIZE_PER_THREAD = 10000000
        )

        type Queue struct {
            records string
        }

        func (self Queue) push(record chan interface{}) {
            // need push counter
            record <- time.Now()
        }

        func (self Queue) pop(record chan interface{}) {
            // need pop counter
            <-record
        }

        func main() {
            runtime.GOMAXPROCS(runtime.NumCPU())

            // record chan
            record := make(chan interface{}, 1000000)
            // finish flag chan
            finish := make(chan bool)
            queue := new(Queue)

            for i := 0; i < 10; i++ {
                go func() {
                    for j := 0; j < DATA_SIZE_PER_THREAD; j++ {
                        queue.push(record)
                    }
                    finish <- true
                }()
            }

            for i := 0; i < 10; i++ {
                go func() {
                    for j := 0; j < DATA_SIZE_PER_THREAD; j++ {
                        queue.pop(record)
                    }
                    finish <- true
                }()
            }

            for i := 0; i < 20; i++ {
                <-finish
            }
        }
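    The counting part is the same idea in any language: keep one shared counter per direction and increment it atomically from each worker, then print after everything joins. Here is a hedged sketch of that pattern in C# (an illustration of the idea only, not a Go answer; in Go the equivalent primitive would be an atomic add from the standard library):

        using System;
        using System.Threading;

        class PushPopCounter
        {
            static long pushed, popped;

            static void Main()
            {
                Thread[] workers = new Thread[20];
                for (int i = 0; i < 20; i++)
                {
                    bool isPusher = i < 10; // first 10 push, last 10 pop
                    workers[i] = new Thread(() =>
                    {
                        for (int j = 0; j < 100000; j++)
                        {
                            // Atomic increment: safe across threads without a lock.
                            if (isPusher) Interlocked.Increment(ref pushed);
                            else Interlocked.Increment(ref popped);
                        }
                    });
                    workers[i].Start();
                }
                foreach (Thread t in workers) t.Join();
                Console.WriteLine("pushed={0} popped={1}", pushed, popped);
            }
        }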

    Read the article

  • LINQ compiled query DataBind issue

    - by Brian
    Hello all, I have a pretty extensive reporting page that uses LINQ. There is one main function that returns an IQueryable object. I then further filter / aggregate the returned query depending on the report the user needs. I changed this function to a compiled query; it worked great, and the speed increase was astonishing. The only problem comes when I want to databind the results. I am databinding to a standard ASP.NET GridView and it works fine, no problems. I am also databinding to an ASP.NET chart control, and this is where my page is throwing an error. This works well:

        GridView gv = new GridView();
        gv.DataSource = gridSource;

    But this does not:

        Series s1 = new Series("Series One");
        s1.Points.DataBindXY(gridSource, "Month", gridSource, "Success");

    The error I receive is:

        System.NotSupportedException: Specified method is not supported

    When I look into my gridSource var at run time with a typical LINQ query, I see this:

        SELECT [t33].[value2] AS [Year], [t33].[value22] AS [Month], [t33].[value3] AS [Calls]......

    I see this after I change the query to compiled:

        {System.Linq.OrderedEnumerable<<>f__AnonymousType15<string,int,int,int,int,int,int,int>,string>}

    This is obviously the reason why DataBindXY is no longer working, but I am not sure how to get around it. Any help would be appreciated! Thanks
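    One hedged idea (an untested sketch, not a confirmed fix): materialize the compiled query's results into a list before binding, so the chart sees a concrete IList rather than the compiled query's ordered-enumerable wrapper. "Month" and "Success" are the property names from the anonymous type shown above.

        // Sketch: force execution once, then bind the materialized list.
        var rows = gridSource.ToList();

        Series s1 = new Series("Series One");
        s1.Points.DataBindXY(rows, "Month", rows, "Success");

    The GridView tolerates the raw enumerable because it simply iterates it, while DataBindXY appears to need a data source it can inspect more aggressively; materializing first sidesteps that difference.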

    Read the article

  • Vote on Pros and Cons of Java HTML to XML cleaners

    - by George Bailey
    I am looking to allow HTML emails (and other HTML uploads) without letting in scripts and stuff. I plan to have a whitelist of safe tags and attributes, as well as a whitelist of CSS properties and value regexes (to prevent automatic return receipts). I asked a question: Parse a badly formatted XML document (like an HTML file). I found there are many, many ways to do this. Some systems have built-in sanitizers (which I don't care so much about). This page is a very nice listing, but I get kinda lost in it: http://java-source.net/open-source/html-parsers It is very important that the parsers never throw an exception. There should always be best-guess results from the parse/clean. It is also very important that the result is valid XML that can be traversed in Java. I posted some product information and said Community Wiki. Please post any other product suggestions you like and say Community Wiki so they can be voted on. Also, any comments or wiki edits on what part of a certain product is better and what is not would be greatly appreciated (for example, speed vs. accuracy). It seems that we will go with either jsoup (seems more active and up to date) or TagSoup (compatible with JDK4 and been around a while). A +1 for any of these products would be if they could convert all style sheets into inline style on the elements.

    Read the article

  • C# .Net Serial DataReceived Event response too slow for high-rate data.

    - by Matthew
    Hi, I have set up a SerialDataReceivedEventHandler with a forms-based program in VS2008 Express. My serial port is set up as follows: 115200, 8N1, DTR and RTS enabled, ReceivedBytesThreshold = 1. I have a device I am interfacing with over a Bluetooth USB-to-Serial adapter. HyperTerminal receives the data just fine at any data rate. The data is sent regularly in 22-byte-long packets. This device has an adjustable rate at which data is sent. At low data rates, 10-20Hz, the code below works great, no problems. However, when I increase the data rate past 25Hz, multiple packets start arriving in one call. What I mean by this is that there should be an event trigger for every incoming packet. With higher output rates, I have tested the buffer size (the BytesToRead property) immediately when the event is called, and there are multiple packets in the buffer by then. I think that the event fires slowly and by the time it reaches the code, more packets have hit the buffer. One test I do is to see how many times the event is triggered per second. At 10Hz, I get 10 event triggers, awesome. At 100Hz, I get something like 40 event triggers, not good. My goal for data rate is: 100Hz is acceptable, 200Hz preferred, and 300Hz optimum. This should work because even at 300Hz, that is only 52800bps, less than half of the set 115200 baud rate. Anything I am overlooking?

        public Form1()
        {
            InitializeComponent();
            serialPort1.DataReceived += new SerialDataReceivedEventHandler(serialPort1_DataReceived);
        }

        private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            this.Invoke(new EventHandler(Display_Results));
        }

        private void Display_Results(object s, EventArgs e)
        {
            // IMU is a byte[] packet buffer declared elsewhere in the form.
            serialPort1.Read(IMU, 0, serialPort1.BytesToRead);
        }
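    For what it's worth, one common pattern that might help (a sketch under assumptions, not a verified fix): do the Read on the event thread, frame complete 22-byte packets there, and marshal to the UI thread only once per packet, since this.Invoke on every DataReceived event blocks the event thread while the UI catches up. ProcessPacket is a hypothetical UI-side handler for one packet, and the sketch assumes the usual using directives (System.Collections.Generic).

        private const int PacketSize = 22;
        private readonly List<byte> rxBuffer = new List<byte>(); // touched only from the event callback

        private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // Drain whatever has arrived; at high rates this can be
            // several packets' worth of bytes in a single event.
            int n = serialPort1.BytesToRead;
            byte[] chunk = new byte[n];
            serialPort1.Read(chunk, 0, n);
            rxBuffer.AddRange(chunk);

            // Hand off every complete packet without blocking the event thread.
            while (rxBuffer.Count >= PacketSize)
            {
                byte[] packet = rxBuffer.GetRange(0, PacketSize).ToArray();
                rxBuffer.RemoveRange(0, PacketSize);
                this.BeginInvoke(new Action<byte[]>(ProcessPacket), packet); // ProcessPacket is hypothetical
            }
        }

    The two key differences from the code above: BeginInvoke instead of Invoke (no waiting on the UI thread), and framing by packet size rather than assuming one event equals one packet.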

    Read the article

  • java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, Ehcache, etc. I'm wondering which, if any, is appropriate for my use. First, I'm not doing web applications (as most of these projects seem to be geared towards web apps). I write servers (Order Management Systems, actually) for financial trading firms. The servers themselves are not too complicated. They need to receive information (market data, orders, executions, etc.) and route it to its destination while possibly transforming some of these messages. I am looking at these products to solve the following problems:

    * A safe repository for the state of the server. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place.
    * This repository should be distributed: in case one of these data stores crashes, one or two more should be up and I should be able to switch to them seamlessly.
    * This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. This system needs to have high throughput and low latency. By sending my data outside the process, I am necessarily slowing performance, but I am trying to balance absolute raw speed and absolute protection of data.
    * This repository should be safe. Similar to the point about several online backups, this system needs to write data to disk (potentially more than one disk).

    I'd really like to stop writing my own 'transaction servers.' Am I correct to be looking into projects such as JBoss Cache, Ehcache, etc.? Thanks

    Read the article

  • What is the fastest way to pull a few element values out of XML files in Perl?

    - by Anon Guy
    I have a bunch of XML files that are about 1-2 megabytes in size. Actually, more than a bunch, there are millions. They're all well-formed and many are even validated against their schema (confirmed with libxml2). All were created by the same app, so they're in a consistent format (though this could theoretically change in the future). I want to check the values of one element in each file from within a Perl script. Speed is important (I'd like to take less than a second per file) and as noted I already know the files are well-formed. I am sorely tempted to simply 'open' the files in Perl and scan through until I see the element I am looking for, grab the value (which is near the start of the file), and close the file. On the other hand, I could use an XML parser (which might protect me from future changes to the XML formatting) but I suspect it will be slower than I'd like. Can anyone recommend an appropriate approach and/or parser? Thanks in advance.
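    Purely to illustrate the scan-and-bail idea the poster describes (sketched in C# rather than Perl, with a hypothetical element name RecordId, since the real element isn't given): read line by line, grab the first match, and close the file without parsing the rest.

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        static class ElementScanner
        {
            // "RecordId" is a hypothetical element name; assumes the
            // element and its value sit on one line, near the file start.
            static readonly Regex Pattern = new Regex(@"<RecordId>([^<]*)</RecordId>");

            public static string FirstValue(string path)
            {
                using (StreamReader reader = new StreamReader(path))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        Match m = Pattern.Match(line);
                        if (m.Success)
                            return m.Groups[1].Value; // found near the start; stop reading
                    }
                }
                return null; // element not present
            }
        }

    The caveat is the one already suspected in the question: this silently depends on the app's consistent formatting, which a real XML parser would survive and this scan would not.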

    Read the article

  • YUI: ensuring DOM elements and scripts are ready

    - by dound
    1. If I put my inline script after the DOM elements it interacts with, should I still use YUI 3's domready event? I haven't noticed any problems, and it seems like I can count on the browser loading the page sequentially. (I already use YUI().use('node', ... to make sure the YUI functions I need have been loaded, since the YUI script is a separate file.)
    2. Is there a way to speed up the loading of widgets like YUI 2's calendar? I load the appropriate script in the <head> element of my page. I use YUI().use('yui2-calendar', ... to make sure the Calendar widget is available. Unfortunately, this causes a short but noticeable delay when I load my page with the calendar. If I omit the YUI().use('yui2-calendar', ... code then it shows up without a noticeable delay - but I guess this could cause the Calendar to not show up at all if the YUI script doesn't load in time?
    3. With regards to #2, is it possible to reduce the visual artifact of the calendar not being present and then showing up? I've made it slightly better by specifying a height and width for the parent div so that at least the space is already allocated, meaning minimal shifting around when it does load.

    Read the article

  • Why is WMDC/ActiveSync so flaky?

    - by Ira Rainey
    I'm developing a Windows Mobile app using the .NET Compact Framework 3.5 and VS2008, debugging with the Device Emulator V3 on Win7, and I seem to have constant problems getting Windows Mobile Device Centre (6.1) to connect. Using the Emulator Manager (9.0.21022.8), I cradle the device using DMA in WMDC. The problem is it's so flaky at actually connecting that it's becoming a pain. I find that when I turn my computer on, before I can get it to connect I have to open up WMDC, disable Connect over DMA, close WMDC down, reopen it again, and then it might cradle. Often I have to do this twice before it will cradle. Once it's cradled it's generally fine, but nothing seems consistent in getting it to connect. Connecting with physical devices is often better, although not always. If I plug a PDA into a USB socket other than the one it was originally plugged into, then it won't connect at all. Often the best/most reliable connection method seems to be over Bluetooth, but that's quite slow. Anybody got any tips or advice?

    Read the article

  • WCF Double Hop questions about Security and Binding.

    - by Ken Maglio
    Background information: a .NET website calls a service facade (aka the external service) on an app server in the DMZ. This external service then calls the internal service, which is on our internal app server. From there the internal service calls a stored procedure (via LINQ to SQL classes) and passes the serialized data back through to the external service, and from there back to the website. We've done this so all communication goes through an external layer (our external app server) and allows interoperability; we access our data just like our clients consuming our services. We've gotten to the point in our development where we have completed the system and it all works; the double hop acts as it should. However, now we are working on securing the entire process. We are looking at using TransportWithMessageCredential. We want WS2007HttpBinding on the external side for interoperability, but netTcpBinding for the bridge through the firewall for security and speed. Questions: If we choose WS2007HttpBinding as the external service's binding and netTcpBinding for the internal service, is this possible? I know WS-* supports this, as does netTcp; however, do they play nice when passing credential information like user/pass? If we go to Kerberos, will this impact anything? We may want to do impersonation in the future. If you can, post any reference links about why you're answering the way you are when you answer; that would be very helpful to us. Thanks!
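    For concreteness, here is a minimal sketch of the binding pair being described (illustrative only, an assumption about how this setup would look in code, not taken from the question itself):

        using System.ServiceModel;

        // External hop: interoperable WS-* binding over HTTPS,
        // with the username/password credential inside the message.
        WS2007HttpBinding external =
            new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential);
        external.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

        // Internal hop: fast binary TCP binding through the firewall,
        // same credential style.
        NetTcpBinding internalHop =
            new NetTcpBinding(SecurityMode.TransportWithMessageCredential);
        internalHop.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

    Both binding types accept TransportWithMessageCredential with a UserName message credential, which is one reason mixing an HTTP external hop with a TCP internal hop is generally considered workable; whether the external service re-presents the caller's credential or a service identity on the second hop is exactly the design decision the question raises.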

    Read the article

  • JSON: Jackson stream parser - is it really worth it?

    - by synic
    I'm making pretty heavy use of JSON parsing in an app I'm writing. Most of what I have done is already implemented using Android's built-in JSONObject library (is it json-lib?). JSONObject appears to create instances of absolutely everything in the JSON string... even if I don't end up using all of them. My app currently runs pretty well, even on a G1. My question is this: are the speed and memory benefits from using a stream parser like Jackson worth all the trouble? By trouble, I mean this - as far as I can tell, there are three downsides to using Jackson instead of the built-in library:

    1. Dependency on an external library. This makes your .apk bigger in the end. Not a huge deal.
    2. Your app is more fragile. Since the parsing is not done automatically, it is more vulnerable to changes in the JSON text that it's parsing. I'm extremely worried that malformed JSON will result in infinite loops (as pull parsing requires a lot of while loops).
    3. Writing code to parse JSON via a stream parser is ugly and tedious.

    Read the article

  • Unhandled Exception with c++ app on Visual Studio 2008 release build - occurs when returning from function

    - by Rich
    Hi, I have a (rather large) application that I have written in C++, and until recently it has been running fine outside of Visual Studio from the release build. However, now, whenever I run it, it says "Unhandled exception at 0x77cf205b in myprog.exe: 0xC0000005: Access violation writing location 0x45000200.", and leads me to "crtexe.c" at line 582 ("mainret = main(argc, argv, envp);") if I attempt to debug it. Note that this problem never shows up if I run my debug executable outside of Visual Studio, or if I run my debug or release build within Visual Studio. It only happens when running the release build outside of Visual Studio. I have been through and put plenty of printfs and a couple of while(1)s in it to see when it actually crashed, and found that the access violation occurs at exactly the point that the value is returned from the function (I'm returning a pointer to an object). I don't fully understand why I would get an access violation at the point it returns, and it doesn't seem to matter what I'm returning, as it still occurs when I return 0. The point it started crashing was when I added a function which does a lot of reading from a file using ifstream. I am opening the stream every time I attempt to read a new file and closing it when I finish reading it. If I keep attempting to run it, it will run once in about 20 tries. It seems a lot more reliable if I run it off my pen drive (it seems to crash the first 3 or 4 times, then run fine after that - maybe it's due to the drive's slower read speed). Thanks for your help, and if I've missed anything let me know.

    Read the article

  • What strategy do you use to sync your code when working from home

    - by Ben Daniel
    At my work I currently have my development environment inside a virtual machine. When I need to do work from home I copy my VM and any databases I need onto a laptop-drive-sized external USB drive. After about 10 minutes of copying I put the drive in my pocket and head home, copy back the VM and databases onto my personal computer, and I'm ready to work. I follow the same steps to take the work back with me. So if I count the total amount of time I spend waiting around for files to finish copying in order for me to take work home and bring it back again, it comes to around 40 minutes! I do have a VPN connection to my work from home (providing the internet is up at both sites) and a decent internet speed (8mbits down/?up), but I find Remote Desktopping into my work machine laggy enough for me to want to work on my VM directly. So in looking at what other options I have, or how I could improve my existing option, I'm interested in what strategy you use or recommend for doing work at home and keeping your code/environment in sync. EDIT: I'd prefer an option where I don't have to commit my changes into version control before I leave work - as I like to make meaningful descriptive comments in my commits, committing would take longer than just copying my VM onto a portable drive! lol Also I'd prefer a solution where my dev environment stays in sync too. Having said that, I'm still very interested in your own solutions even if they don't exactly solve my problem as well as I'd like. :)

    Read the article

  • Is SQLDataReader slower than using the command line utility sqlcmd?

    - by Andrew
    I was recently advocating to a colleague that we replace some C# code that uses the sqlcmd command line utility with a SqlDataReader. The old code uses:

        System.Diagnostics.ProcessStartInfo procStartInfo =
            new System.Diagnostics.ProcessStartInfo("cmd", "/c " + sqlCmd);

    where sqlCmd is something like:

        "sqlcmd -S " + serverName + " -y 0 -h-1 -Q " + "\"" + "USE [" + database + "]" + ";" + txtQuery.Text + "\"";

    The results are then parsed using regular expressions. I argued that using a SqlDataReader would be more in line with industry practices, easier to debug and maintain, and probably faster. However, the SqlDataReader approach is at least the same speed and quite possibly slower. I believe I'm doing everything correctly with SqlDataReader. The code is:

        using (SqlConnection connection = new SqlConnection())
        {
            try
            {
                SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connectionString);
                connection.ConnectionString = builder.ToString();
                SqlCommand command = new SqlCommand(queryString, connection);
                connection.Open();
                SqlDataReader reader = command.ExecuteReader();
                // do stuff w/ reader
                reader.Close();
            }
            catch (Exception ex)
            {
                outputMessage += (ex.Message);
            }
        }

    I've used System.Diagnostics.Stopwatch to time both approaches, and the command line utility (called from C# code) does seem faster (20-40%?). The SqlDataReader has the neat feature that when the same code is called again, it's lightning fast, but for this application we don't anticipate that. I have already done some research on this problem. I note that the command line utility sqlcmd uses OLE DB technology to hit the database. Is that faster than ADO.NET? I'm really surprised, especially since the command line utility approach involves starting up a process. I really thought it would be slower. Any thoughts? Thanks, Dave
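    Since the question already notes that repeat calls are lightning fast, a timing harness along these lines is worth running (just a sketch, assuming the usual using directives): measure several runs and look at the first run separately, because the first ADO.NET call pays one-time costs (connection-pool creation, plan caching) that a fresh sqlcmd process pays differently on every invocation.

        Stopwatch sw = new Stopwatch();
        for (int run = 0; run < 5; run++)
        {
            sw.Reset();
            sw.Start();
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(queryString, connection))
            {
                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read()) { /* consume every row, as the regex parsing did */ }
                }
            }
            sw.Stop();
            Console.WriteLine("Run {0}: {1} ms", run, sw.ElapsedMilliseconds);
        }

    Comparing run 0 against runs 1-4 separates connection/plan warmup from the per-query cost, which is the fairer comparison against a process-per-query sqlcmd call.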

    Read the article

  • Looking for Hardware that will easily interface with my .NET code.

    - by SkippyFire
    I'm a .NET C# developer looking to do some hardware interfacing/programming. I just want something super simple to mess around with. I have done one of those Basic Stamp projects, but I want something with less electrical work. A self-contained piece of hardware would be fine. I'm not really looking to do embedded programming... but it would actually be pretty cool if something was capable of running .NET code. I'm looking for something that would be easy to connect, hopefully via USB. Serial ports seem to be more hit or miss nowadays with laptops and netbooks. Something I can easily send data to, like a mini LCD or a series of LEDs. Better yet would be something that provides feedback, like a temperature sensor. The best would be something more full-featured that I could talk to: I would be able to send data to it, and it would send back responses. Maybe something like a servo that could report its position? Or maybe something that I could set parameters on? Any ideas? Thanks in advance!

    Read the article

  • How to temporarily replace one primitive type with another when compiling to different targets in c#

    - by Keith
    How can I easily/quickly replace floats with doubles (for example) when compiling to two different targets using these two particular choices of primitive types? Discussion: I have a large amount of C# code under development that I need to compile to alternatively use float, double or decimal depending on the use case of the target assembly. Using something like “class MYNumber : Double” so that it is only necessary to change one line of code does not work, as Double is sealed, and obviously there is no #define in C#. Peppering the code with #if #else statements is also not an option; there is just too much supporting math operator and related code using these particular primitive types. I am at a loss on how to do this apparently simple task, thanks! Edit: Just a quick comment in relation to boxing, mentioned in Kyle's reply: unfortunately I need to avoid boxing, mainly since floats are being chosen when maximum speed is required, and decimals when maximum accuracy is the priority (and taking the 20x+ performance hit is acceptable). Boxing would probably rule out decimals as a valid choice and defeat the purpose somewhat. Edit2: For reference, those suggesting generics as a possible answer to this question should note that there are many issues which count generics out (at least for our needs). For an overview and further references see Using generics for calculations
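    One conventional workaround, sketched here under assumptions (FLOAT_BUILD and DECIMAL_BUILD are hypothetical conditional-compilation symbols you would define per build configuration): confine the #if to a single using-alias block at the top of each file, so the rest of the code mentions only the alias.

        #if FLOAT_BUILD
        using MyNumber = System.Single;
        #elif DECIMAL_BUILD
        using MyNumber = System.Decimal;
        #else
        using MyNumber = System.Double;
        #endif

        class Physics
        {
            // Compiles unchanged against float, double, or decimal targets.
            public static MyNumber Accelerate(MyNumber v, MyNumber a, MyNumber dt)
            {
                return v + a * dt;
            }
        }

    The alias is resolved at compile time, so it avoids boxing entirely, but it has two known costs: using aliases are per-file, so the block must be repeated (or generated) in every source file, and numeric literals still need per-type suffixes (1.5f vs. 1.5m), which usually means routing constants through a small helper class.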

    Read the article

  • Search for string allowing for one mismatch in any location of the string, Python

    - by Vincent
    I am working with DNA sequences of length 25 (see examples below). I have a list of 230,000 and need to look for each sequence in the entire genome (of the toxoplasma gondii parasite). I am not sure how large the genome is, but it is much longer than my 230,000 sequences. I need to look for each of my sequences of 25 characters, for example AGCCTCCCATGATTGAACAGATCAT. The genome is formatted as a continuous string, i.e. (CATGGGAGGCTTGCGGAGCCTGAGGGCGGAGCCTGAGGTGGGAGGCTTGCGGAGTGCGGAGCCTGAGCCTGAGGGCGGAGCCTGAGGTGGGAGGCTT.........). I don't care where or how many times it is found, just yes or no. This is simple, I think: str.find(AGCCTCCCATGATTGAACAGATCAT). But I also want to find a close match, defined as a mismatch at any location but only 1 location, and to record the location in the sequence. I am not sure how to do this. The only thing I can think of is using a wildcard and performing the search with the wildcard in each position, i.e. searching 25 times. For example, AGCCTCCCATGATTGAACAGATCAT vs. AGCCTCCCATGATAGAACAGATCAT is a close match with a mismatch at position 13. Speed is not a big issue as I am only doing it 3 times (I hope), but it would be nice if it was fast. There are programs that find matches and partial matches, but I am looking for a type of partial match that is not available with these applications. Here is a similar post for Perl, but they are only comparing sequences, not searching a continuous string: Related post
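    A brute-force sketch of the at-most-one-mismatch scan (written in C# for illustration; the same two loops translate line for line into Python): slide the 25-character pattern along the genome, count mismatches at each offset, and bail out of the inner loop as soon as a second mismatch appears.

        using System;

        static class NearMatch
        {
            // Reports exact matches and matches with exactly one mismatch,
            // including where in the pattern the mismatch sits.
            public static void Search(string genome, string pattern)
            {
                for (int i = 0; i + pattern.Length <= genome.Length; i++)
                {
                    int mismatches = 0, mismatchPos = -1;
                    for (int j = 0; j < pattern.Length && mismatches <= 1; j++)
                    {
                        if (genome[i + j] != pattern[j])
                        {
                            mismatches++;
                            mismatchPos = j;
                        }
                    }
                    if (mismatches == 0)
                        Console.WriteLine("exact match at genome offset " + i);
                    else if (mismatches == 1)
                        Console.WriteLine("near match at offset " + i + ", mismatch at pattern position " + mismatchPos);
                }
            }
        }

    With 230,000 patterns this naive scan repeats work, and the wildcard-per-position idea in the question costs about the same. If speed ever matters, the standard trick is to split each pattern in half: with at most one mismatch allowed, one half must match exactly, so an index of the genome's substrings finds candidate offsets quickly.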

    Read the article

  • Keyboard hook returns different symbols from card reader depending on whether my app is in focus or not

    - by user363868
    I am coding a WinForms application where one of the inputs is a magnetic stripe card reader (CR). I am using code from George Mamaladze's article Processing Global Mouse and Keyboard Hooks in C# on codeproject.com to listen to the keyboard (the USB card reader acts the same way as a keyboard), and I have a weird situation. One card reader, CR1 (Unitech MS240-2UG), produces keystrokes which I intercept on the KeyPress event; I analyze them for a certain pattern like %ABCD-6EFJHI? and trigger some logic. Analysis is required because the user can type something else into the application, or into another application, while my app is open. When I use another card reader, CR2 (IdTech IDBM-334133), the keystrokes intercepted by the hook start with the number 5 instead of % (it is actually the same key on the keyboard). Since it is the starting sentinel, it is very important for me to be able to recognize input from the card reader. Moreover, if my app is running in the background and focus is on Notepad when I swipe a card, the string %ABCD-6EFJHI? appears in Notepad and is intercepted the same way (with the proper starting character) by the keyboard hook. If swiped when focus is on the Form, it is 5ABCD-6EFJHI?. A user who tried the app with another card reader got the same result as I do with CR2; only CR1 works as expected for me. I looked in the Windows Device Manager and both devices use the same HID driver supplied by MS. I checked the devices through the respective software from the CR makers, and the starting and ending sentinels are set to % and ? respectively on both. I would appreciate any ideas and thoughts, as I have hit a wall myself. Thank you

    Read the article

  • count on LINQ union

    - by brechtvhb
    I have this LINQ statement:

        List<UserGroup> domains = UserRepository.Instance.UserIsAdminOf(currentUser.User_ID);

        query = (from doc in _db.Repository<Document>()
                 join uug in _db.Repository<User_UserGroup>() on doc.DocumentFrom equals uug.User_ID
                 where domains.Contains(uug.UserGroup)
                 select doc)
                .Union(from doc in _db.Repository<Document>()
                       join uug in _db.Repository<User_UserGroup>() on doc.DocumentTo equals uug.User_ID
                       where domains.Contains(uug.UserGroup)
                       select doc);

    Running this statement doesn't cause any problems. But when I want to count the result set, the query suddenly runs quite slowly:

        totalRecords = query.Count();

    The result of this query is:

        SELECT COUNT([t5].[DocumentID])
        FROM (
            SELECT [t4].[DocumentID], [t4].[DocumentFrom], [t4].[DocumentTo]
            FROM (
                SELECT [t0].[DocumentID], [t0].[DocumentFrom], [t0].[DocumentTo]
                FROM [dbo].[Document] AS [t0]
                INNER JOIN [dbo].[User_UserGroup] AS [t1] ON [t0].[DocumentFrom] = [t1].[User_ID]
                WHERE ([t1].[UserGroupID] = 2) OR ([t1].[UserGroupID] = 3) OR ([t1].[UserGroupID] = 6)
                UNION
                SELECT [t2].[DocumentID], [t2].[DocumentFrom], [t2].[DocumentTo]
                FROM [dbo].[Document] AS [t2]
                INNER JOIN [dbo].[User_UserGroup] AS [t3] ON [t2].[DocumentTo] = [t3].[User_ID]
                WHERE ([t3].[UserGroupID] = 2) OR ([t3].[UserGroupID] = 3) OR ([t3].[UserGroupID] = 6)
            ) AS [t4]
        ) AS [t5]

    Can anyone help me improve the speed of the count query? Thanks in advance!
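    One thing that might be worth trying (a sketch, not a tested fix): project down to just the key column before the Union, so the generated COUNT query doesn't have to carry and de-duplicate full Document rows through the nested sub-selects.

        // Sketch: count over DocumentID only. Union already removes
        // duplicates, so this counts distinct matching documents.
        var idQuery =
            (from doc in _db.Repository<Document>()
             join uug in _db.Repository<User_UserGroup>() on doc.DocumentFrom equals uug.User_ID
             where domains.Contains(uug.UserGroup)
             select doc.DocumentID)
            .Union(from doc in _db.Repository<Document>()
                   join uug in _db.Repository<User_UserGroup>() on doc.DocumentTo equals uug.User_ID
                   where domains.Contains(uug.UserGroup)
                   select doc.DocumentID);

        totalRecords = idQuery.Count();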

    Read the article

  • What is more viable to use? Javascript libraries or UI Programming tools?

    - by Haresh Karkar
    What is more viable to use: JavaScript libraries (YUI, jQuery, ExtJS) or UI programming tools (GWT, ExtGWT, SmartGWT)? It has become very difficult to choose between them as they are constantly increasing their capabilities to meet newer requirements. We all know the power of jQuery in UI manipulation. The latest news from Microsoft about jQuery being officially part of the .NET developer's toolkit will definitely make jQuery a preferred choice against other JavaScript libraries [see link: http://weblogs.asp.net/scottgu/archive/2008/09/28/jquery-and-microsoft.aspx]. But on the other hand, GWT is building a framework which can be used on the client as well as on the server side. This will definitely make developers' lives easier, as it does not require the developer to be an expert in browser quirks, XMLHttpRequest, and JavaScript in order to develop high-performance web applications. It includes an SDK (Java API libraries, a compiler, and a development server which allows you to write client-side applications in Java and deploy them as JavaScript), Speed Tracer, and a plug-in for Eclipse. GWT is used by many products like Google Wave and AdWords. So the question is still unanswered: what is more viable to use? Any thoughts?

    Read the article

  • Can a GeneralPath be modified?

    - by Dov
    Java2D is fairly expressive, but requires constructing lots of objects. In contrast, the older API would let you call methods to draw various shapes, but lacks all the new features like transparency, stroke, etc. Java has fairly high costs associated with object creation. For speed, I would like to create a GeneralPath whose structure does not change, but go in and change the x,y points inside.

        path = new GeneralPath(GeneralPath.WIND_EVEN_ODD, 10);
        path.moveTo(x, y);
        path.lineTo(x2, y2);
        double len = Math.sqrt((x2-x)*(x2-x) + (y2-y)*(y2-y));
        double dx = (x-x2) * headLen / len;
        double dy = (y-y2) * headLen / len;
        double dx2 = -dy * (headWidth/headLen);
        double dy2 = dx * (headWidth/headLen);
        path.lineTo(x2 + dx + dx2, y2 + dy + dy2);
        path.moveTo(x2 + dx - dx2, y2 + dy - dy2);
        path.lineTo(x2, y2);

    This one isn't even that long. Imagine a much longer sequence of commands, where only the ones on the end are changing. I just want to be able to overwrite commands - to have an iterator, effectively. Does that exist?

    Read the article

  • Banning by IP with php/mysql

    - by incrediman
    I want to be able to ban users by IP. My idea is to keep a list of IPs as rows in a BannedIPs table (the IP column would be an index). To check users' IPs against the table, I will keep a session variable called $_SESSION['IP'] for each session. If on any request $_SESSION['IP'] doesn't match $_SERVER['REMOTE_ADDR'], I will update $_SESSION['IP'] and check the BannedIPs table to see if the IP is banned. (A flag will also be saved as a session variable specifying whether or not the user is banned.) Here are the things I'm wondering: 1. Does that sound like a good strategy with regards to speed and security (would someone be able to get around the IP ban somehow, other than changing IPs)? 2. What's the best way to structure a MySQL query that checks whether a row exists? That is, what's the best way to query the db to see if a row with a certain IP exists (to check if it's banned)? 3. Should I save the IPs as integers or strings? Note that I estimate there will be between 1,000 and 10,000 banned IPs stored in the database, and that $_SERVER['REMOTE_ADDR'] is the IP from which the current request was sent.

    Read the article

  • Refactoring ADO.NET - SqlTransaction vs. TransactionScope

    - by marc_s
    I have "inherited" a little C# method that creates an ADO.NET SqlCommand object and loops over a list of items to be saved to the database (SQL Server 2005). Right now, the traditional SqlConnection/SqlCommand approach is used, and to make sure everything works, the two steps (delete old entries, then insert new ones) are wrapped into an ADO.NET SqlTransaction. using (SqlConnection _con = new SqlConnection(_connectionString)) { using (SqlTransaction _tran = _con.BeginTransaction()) { try { SqlCommand _deleteOld = new SqlCommand(......., _con); _deleteOld.Transaction = _tran; _deleteOld.Parameters.AddWithValue("@ID", 5); _con.Open(); _deleteOld.ExecuteNonQuery(); SqlCommand _insertCmd = new SqlCommand(......, _con); _insertCmd.Transaction = _tran; // add parameters to _insertCmd foreach (Item item in listOfItem) { _insertCmd.ExecuteNonQuery(); } _tran.Commit(); _con.Close(); } catch (Exception ex) { // log exception _tran.Rollback(); throw; } } } Now, I've been reading a lot about the .NET TransactionScope class lately, and I was wondering, what's the preferred approach here? Would I gain anything (readibility, speed, reliability) by switching to using using (TransactionScope _scope = new TransactionScope()) { using (SqlConnection _con = new SqlConnection(_connectionString)) { .... } _scope.Complete(); } What you would prefer, and why? Marc

    Read the article

  • JQuery Cycle fails on Page Refresh

    - by Darknight
    This is a similar issue to this one: http://stackoverflow.com/questions/1719475/jquery-cycle-firefox-squishing-images - I've managed to overcome the initial problem using Jeff's answer in the above link. However, I have now noticed a new bug: upon page refresh it simply does not work. I have tried a hard refresh (Ctrl+F5) but this does not help either. However, when you come back to the page, it loads fine. Here is my modified version (taken from Jeff's):

        <script type="text/javascript">
            $(document).ready(function() {
                var imagesRemaining = $('#slideshow img').length;
                $('#slideshow img').bind('load', function(e) {
                    imagesRemaining = imagesRemaining - 1;
                    if (imagesRemaining == 0) {
                        $('#slideshow').show();
                        $('#slideshow').cycle({
                            fx: 'shuffle',
                            speed: 1200
                        });
                    }
                });
            });
        </script>

    Any ideas? I've also tried jQuery's live() but could not implement it correctly, and meta tags to force images to load, but it only works the first time round.

    Read the article

  • WF performance with new 20,000 persisted workflow instances each month

    - by Nikola Stjelja
    Windows Workflow Foundation has a problem: it is slow when persisting WF instances. I'm planning a project whose business layer will be based on WF-exposed WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish. I was led to believe that, given WF's slowness at persistence, my problem would be unattainable for performance reasons. I have the following questions: Is this true? Will my performance be crap with that load (given WF persistence speed limitations)? How can I solve the problem? We currently have two possible solutions: 1. Each new business process request (e.g. "Give me a new driver's license") will be a new WF instance, and the number of persistence operations will be limited by forwarding all status request operations to saved state values in a separate database. 2. Have only a small number of workflow instances up at any given time, without any persistence whatsoever (only in case of system crashes, etc.), by breaking each workflow step into a separate workflow, with that workflow handling every business process request instance in the system that is currently at that step (e.g. I'm submitting my driver's license request form, which is step one; we have 100 cases of that, and my step-one workflow will handle every case simultaneously). I'm very interested in solutions to this problem. If you want to discuss it, please feel free to mail me at [email protected]

    Read the article
