Search Results

Search found 15777 results on 632 pages for 'desktop computers'.

  • Turing Machine & Modern Computer

    - by smwikipedia
    I have heard a lot that modern computers are based on the Turing machine. I'd like to share my understanding and hear your comments. I think the computer is a big general-purpose Turing machine, and each program we write is a small special-purpose Turing machine. The classical Turing machine does its job based on its input and its current internal state, and so do our programs. Let's take a running program (a process) as an example. We know that in the process's address space there are areas for the stack, the heap, and the code. A classical Turing machine doesn't have the ability to remember many things, so we borrow the concept of a stack from the push-down automaton. The heap and stack areas contain the state of our special-purpose Turing machine (our program). The code area represents the logic of this small Turing machine. And various I/O devices supply input to this Turing machine. The above is my naive understanding of the working paradigm of the modern computer. I can't wait to hear your comments. Thanks very much.

    Read the article

  • Parsing basic math equations for children's educational software?

    - by Simucal
    Inspired by a recent TED talk, I want to write a small piece of educational software. The researcher created miniature computers in the shape of blocks called "Siftables". [David Merril, inventor - with Siftables in the background.] There were many applications he used the blocks in, but my favorite was when each block was a number or a basic operation symbol. You could then re-arrange the blocks of numbers or operation symbols in a line, and it would display the answer on another Siftable block. So, I've decided I want to implement a software version of "Math Siftables" on a limited scale as my final project for a CS course I'm taking. What is the generally accepted way to parse and interpret a string of math expressions and, if they are valid, perform the operation? Is this a case where I should implement a full parser/lexer? I would imagine interpreting basic math expressions is a fairly common problem in computer science, so I'm looking for the right way to approach it. For example, if my Math Siftable blocks were arranged like: [1] [+] [2] this would be a valid sequence and I would perform the necessary operation to arrive at "3". However, if the child were to drag several operation blocks together, such as: [2] [\] [\] [5] it would obviously be invalid. Ultimately, I want to be able to parse and interpret any number of chained operations with the blocks that the user can drag together. Can anyone explain to me, or point me to resources on, parsing basic math expressions? I'd prefer as language-agnostic an answer as possible.
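    A common answer to this kind of question is a small recursive-descent parser over the sequence of blocks. The sketch below is illustrative only (the grammar, class name, and token handling are assumptions, not part of the original question); it treats each block as one token, applies the usual precedence rules, and rejects sequences where two operator blocks sit next to each other:

        // Minimal recursive-descent sketch in C#: expr -> term (('+'|'-') term)*,
        // term -> factor (('*'|'/') factor)*, factor -> number.
        using System;
        using System.Collections.Generic;

        class BlockParser
        {
            private readonly List<string> _tokens;   // one token per block, e.g. "1", "+", "2"
            private int _pos;

            public BlockParser(IEnumerable<string> blocks) { _tokens = new List<string>(blocks); }

            public bool TryEvaluate(out double result)
            {
                _pos = 0;
                try
                {
                    result = ParseExpr();
                    return _pos == _tokens.Count;    // every block must be consumed
                }
                catch (FormatException) { result = 0; return false; }
            }

            private double ParseExpr()
            {
                double value = ParseTerm();
                while (Peek() == "+" || Peek() == "-")
                    value = Next() == "+" ? value + ParseTerm() : value - ParseTerm();
                return value;
            }

            private double ParseTerm()
            {
                double value = ParseFactor();
                while (Peek() == "*" || Peek() == "/")
                    value = Next() == "*" ? value * ParseFactor() : value / ParseFactor();
                return value;
            }

            private double ParseFactor()
            {
                string token = Next();
                double number;
                if (token == null || !double.TryParse(token, out number))
                    throw new FormatException("expected a number block");
                return number;
            }

            private string Peek() { return _pos < _tokens.Count ? _tokens[_pos] : null; }
            private string Next() { return _pos < _tokens.Count ? _tokens[_pos++] : null; }
        }

    For example, given double answer, new BlockParser(new[] { "1", "+", "2" }).TryEvaluate(out answer) returns true with answer == 3, while a sequence with two adjacent operator blocks fails the parse. A full lexer is unnecessary here because the blocks are already tokens.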

    Read the article

  • algorithm || method to write prog

    - by fatai
    I am a computer science student. I wonder whether everyone solves problems with the same method or with different ones; actually, I don't know whether they use any method at all, or whether a common method for approaching problems even exists. Our teachers give us problems, sometimes in simple form, but they don't introduce any approach or method(s) that would let us first choose a method, then apply it to the problem, find a solution, and finally write the code. I found one method for myself after I failed a course. More precisely, when I run into a problem in a language, I take some paper and then:
    First, the input/output step: my program will take this/these argument(s) and return, say, X. Example in C: the input length is not known in advance and all items are of the same type, so I must use a pointer; the desired output is in the form of a package, so I use a structure.
    Second, the execution part: in this step I write down all the steps that lead to the final output. Example in Python: 1.) [ + , [- , 4 , [ * , 1 , 2 ]], 5] 2.) [ + , [- , 4 , 2 ], 5] 3.) [ + , 2 , 5] 4.) 7 ==> return 7
    Third, I write test code. Example in C++: input: append 3 4 5 6 to vector_x, remove 0 1; desired output: vector_x holds 5 6.
    But now I wonder about other methods: methods used to construct classes (for C++, Python, Java), methods used for communication between classes or computers, and methods used for solving embedded-system problems (for C). Why do I wonder? Because I have learned that if you don't construct the algorithm on paper first, you may not achieve your goal. Just as "no money, no lunch", I can say "no algorithm, no program". So feel free to describe your own method, or a way introduced by someone else that you use and find really efficient.

    Read the article

  • Modify shell script to monitor/ping multiple ip addresses

    - by Alex
    Alright, so I need to constantly monitor multiple routers and computers to make sure they remain online. I have found a great script here that will notify me via Growl (so I can get instant notifications on my phone) if a single IP cannot be pinged. I have been attempting to modify the script to ping multiple addresses, with little luck. I'm having trouble figuring out how to keep pinging a down server while the script keeps watching the online servers. Any help would be greatly appreciated. I haven't done much shell scripting, so this is quite new to me. Thanks!

        #!/bin/sh
        # Growl my Router alive!
        # 2010 by zionthelion73 [at] gmail . com
        # use it for free
        # redistribute or modify but keep these comments
        # not for commercial purposes

        # path must be absolute or in "./path" form but relative to growlnotify position
        # document icon is used, not document content
        iconpath="/path/to/router/icon/file/internet.png"

        # Put the IP address of your router here
        localip=192.168.1.1

        clear
        echo 'Router avaiability notification with Growl'

        # variable
        avaiable=false
        com="################"   # comment prefix for logging purposes

        while true; do
            if $avaiable
            then
                echo "$com 1) $localip avaiable $com"
                echo "1"
                while ping -c 1 -t 2 $localip
                do
                    sleep 5
                done
                growlnotify -s -I $iconpath -m "$localip is offline"
                avaiable=false
            else
                echo "$com 2) $localip not avaiable $com"
                # try to ping the router until it comes back and notify it
                while !(ping -c 1 -t 2 $localip)
                do
                    echo "$com trying.... $com"
                    sleep 5
                done
                echo "$com found $localip $com"
                growlnotify -s -I $iconpath -m "$localip is online"
                avaiable=true
            fi
            sleep 5
        done

    Read the article

  • Templates, and C++ operator for logic: B contained by set A

    - by James Morris
    In C++, I'm looking to implement an operator for selecting items in a list (of type B) based upon B being contained entirely within A. In the book "The Logical Design of Digital Computers" by Montgomery Phister Jr. (published 1958), p. 54, it says: F11 = A + ~B has two interesting and useful associations, neither of them having much to do with computer design. The first is the logical notation of implication... The second is the notation of inclusion... This may be expressed by a familiar-looking relation, B < A; or by the statement "B is included in A"; or by the boolean equation F11 = A + ~B = 1. My initial implementation was in C. Callbacks were given to the list to use for such operations. An example being a list of ints, and a struct containing two ints, min and max, for selection purposes. There, selection would be based upon B >= A->min && B <= A->max. Using C++ and templates, how would you approach this after having implemented a generic list in C using void pointers and callbacks? Is using < as an overridden operator for such purposes... <ugh> evil? </ugh> (Or by using a class B for the selection criteria, implementing the comparison by overloading?)

    Read the article

  • VS2010 express beta2 - no add reference dialog, no open file/project dialogs

    - by David
    Just installed VS2010 express for Windows Phone last night. Install went smoothly. It creates a project, compiles, and deploys the app to the emulator. Here's the problem: When I try to "Add Reference" through the Project menu, I do not get the Add Reference dialog box. Same thing if I right click References in the solution explorer and click Add Reference. That's not all. "File...Open" and "File...Open Project" also fail to throw up an open file dialog box. When attempting any of these actions, the IDE quickly loses and regains focus. Even pressing a keyboard shortcut (Ctrl+O) causes the IDE to quickly lose and regain focus, but no open file dialog box appears. This is what I have tried, not particularly in this order:
    1. Turned off UAC
    2. Monitored file and registry access using Process Monitor during a File...Open operation. File activity showed mostly "SUCCESS" with a few "FAST IO DISALLOWED" and a few "INVALID DEVICE REQUEST" results. Registry activity showed mostly "SUCCESS" with some "NAME NOT FOUND" and a few "BUFFER OVERFLOW" results.
    3. Created a new, clean Windows account to run the IDE from
    4. Forced a test project to add a reference to "System.Xml.Linq" by editing the ".csproj" project file. Project failed to load in the IDE.
    I don't have these problems at all on 2 other Windows 7 computers with VS2010 C# express beta 2 installed. One machine is 32-bit and the other 64-bit, both Home Premium edition. My system: Windows 7 Home Premium, 64-bit. Other Visual Studio products installed: VS2008 C# express, VS2008 C++ express. One other thing to note: Several months ago I installed the non-phone distribution of VS2010 C# express beta 2, and I had the same exact problems. Back then I chalked it up to being beta and went back to VS2008 C# express, where I do not have these issues.

    Read the article

  • Open XML document ContentControls problem with signed id's

    - by willvv
    I have an application that generates Open XML documents with content controls. To create a new content control I use Interop and the ContentControls.Add method, which returns an instance of the added content control. I have some logic that saves the id of the content control to reference it later, but on some computers I've been having a weird problem. When I access the ID property of the content control I just created, it returns a string with the numeric id; the problem is that when this value is too big, after I save the document, if I look through document.xml in the generated document, the <w:id/> element of the <w:sdtPr/> element has a negative value, which is the signed equivalent of the value I got from the ID property of the generated control. For example:

        var contentControl = ContentControls.Add(...);
        var contentControlId = contentControl.ID;
        // the value of contentControlId is "3440157266"

    If I save the document and open it in the Package Explorer, the id of the content control is "-854810030" instead of "3440157266". What I have figured out is this: ((int)uint.Parse("3440157266")).ToString() returns "-854810030". Any idea why this happens? This issue is hard to replicate because I don't control the id of the generated controls; the id is automatically generated by the Interop libraries.
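    What seems to be happening (an interpretation based on the cast shown above, not something confirmed by the Interop documentation) is that the same 32-bit id is reported unsigned by Interop but stored signed in the markup, so the two values are the same bits. A small sketch of normalising between the two forms:

        // Reinterpret the 32-bit id between the unsigned form reported by Interop
        // and the signed form that ends up in document.xml.
        uint interopId = uint.Parse("3440157266");    // value from contentControl.ID
        int storedId   = unchecked((int)interopId);   // -854810030, as seen in <w:id/>
        uint backAgain = unchecked((uint)storedId);   // 3440157266 again

    Comparing ids after mapping both sides through the same conversion avoids the mismatch, whatever value the library happens to generate.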

    Read the article

  • Bi-directional view model syncing with "live" collections and properties (MVVM)

    - by Schneider
    I am getting my knickers in a twist recently about view models (VMs). Just like this guy, I have come to the conclusion that the collections I need to expose on my VM typically contain a different type than the collections exposed on my business objects. Hence there must be a bi-directional mapping or transformation between these two types. (Just to complicate things, on my project this data is "live", such that as soon as you change a property it gets transmitted to other computers.) I can just about cope with that concept, using a framework like Truss, although I suspect there will be a nasty surprise somewhere within. Not only must objects be transformed, but a synchronization between these two collections is required. (Just to complicate things, I can think of cases where the VM collection might be a subset or union of business-object collections, not simply a 1:1 synchronization.) I can see how to do a one-way "live" sync, using a replicating ObservableCollection or something like CLINQ. The problem then becomes: what is the best way to create/delete items? Bi-directional sync does not seem to be on the cards - I have found no such examples, and the only class that supports anything remotely like it is ListCollectionView. Would bi-directional sync even be a sensible way to add items back into the business object collection? All the samples I have seen never seem to tackle anything this "complex". So my question is: how do you solve this? Is there some technique to update the model collections from the VM? What is the best general approach to this?
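    For the one-way "live" half, the usual building block is a listener on the model collection's CollectionChanged event that projects model items into view-model items. The sketch below only illustrates that idea (the class and member names are invented, and Move/Replace plus the reverse direction are left out); it is not a full bi-directional framework:

        using System;
        using System.Collections.ObjectModel;
        using System.Collections.Specialized;

        // Keeps a VM collection in sync with a model collection, one way,
        // mapping TModel -> TViewModel via a supplied projection.
        public class CollectionSync<TModel, TViewModel>
        {
            private readonly ObservableCollection<TModel> _source;
            private readonly ObservableCollection<TViewModel> _target;
            private readonly Func<TModel, TViewModel> _map;

            public CollectionSync(ObservableCollection<TModel> source,
                                  ObservableCollection<TViewModel> target,
                                  Func<TModel, TViewModel> map)
            {
                _source = source;
                _target = target;
                _map = map;

                _target.Clear();
                foreach (var item in _source)
                    _target.Add(_map(item));

                _source.CollectionChanged += OnSourceChanged;
            }

            private void OnSourceChanged(object sender, NotifyCollectionChangedEventArgs e)
            {
                switch (e.Action)
                {
                    case NotifyCollectionChangedAction.Add:
                        _target.Insert(e.NewStartingIndex, _map((TModel)e.NewItems[0]));
                        break;
                    case NotifyCollectionChangedAction.Remove:
                        _target.RemoveAt(e.OldStartingIndex);
                        break;
                    case NotifyCollectionChangedAction.Reset:
                        _target.Clear();
                        foreach (var item in _source)
                            _target.Add(_map(item));
                        break;
                    // Move/Replace omitted for brevity.
                }
            }
        }

    Going the other way needs a second handler on the VM collection plus a guard flag so the two handlers do not feed each other; that re-entrancy guard is where most hand-rolled bi-directional syncs get hairy, which is presumably why so few samples tackle it.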

    Read the article

  • What kinds of problems are most likely to occur? (question rewritten)

    - by ChrisC
    If I wrote
    1) a C# SQL db app (a simple program consisting of a gui over some forms with logic for interfacing with the sql db)
    2) for home use, that doesn't do any network communication
    3) that uses a simple, reliable, and appropriate sql db
    4) whose gui is properly separated from the logic
    5) that has complete and dependable input data validation
    6) that has been completely tested so that 100% of logic bugs were eliminated
    ... and then if the program was installed and run by random users on their random Windows computers:
    Q1) What types of technical (non-procedural) problems and support situations are most likely to occur, and how likely are they?
    Q2) Are there more/other things I could do in the first place to prevent those problems and also minimize the amount of user support required?
    I know some answers will apply to my specific platforms (C#, SQL, Windows, etc) and some won't. Please be as specific as is possible. Mitch Wheat gave me some very valuable advice below, but I'm now offering the bounty because I am hoping to get a better picture of the kinds of things that I'm most reasonably likely to encounter. Thanks.

    Read the article

  • How does jquery display an image received from an ajax request?

    - by Gnee
    I have this working great, but I'd like a deeper understanding of what is actually going on behind the scenes. I am using jQuery's Ajax method to pull 5 blog posts (returning only the title and first photo). A PHP script grabs each blog post's title and first photo, sticks them in an array, and sends it back to my browser as JSON. Upon receiving the JSON object, jQuery grabs the first member of the JSON object and displays its title and photo. In a gallery I made, using buttons, the user can iterate through the 1-5 posts. So the actual AJAX call happens right away, and only once. I am basically using this kind of setup: $('my_div').html(json_obj[i]) and each click does an i++. So is jQuery plucking these blog posts from my computer's memory, my web browser's cache, or some kind of cache in the JavaScript engine? One of the things it's returning is a pretty gnarly animated gif. I just wonder if it is constantly running in the background (but not visible), stealing processing cycles... etc. Or is JavaScript just inserting it (say, a Flash movie) into the DOM, while beforehand it does nothing but take up a little memory (no processing)? Anyway, I'm just curious. If someone is a guru on this, I'd love to hear your take. Thanks!!

    Read the article

  • int.Parse of "8" fails. int.Parse always requires CultureInfo.InvariantCulture?

    - by Henrik Carlsson
    We develop an established piece of software which works fine on all known computers except one. The problem is parsing strings that begin with "8". It seems as if "8" at the beginning of a string were a reserved character.
    Parsing:
        int.Parse("8")    -> Exception message: Input string was not in a correct format.
        int.Parse("80")   -> 0
        int.Parse("88")   -> 8
        int.Parse("8100") -> 100
    CurrentCulture: sv-SE
    CurrentUICulture: en-US
    The problem is solved using int.Parse("8", CultureInfo.InvariantCulture). However, it would be nice to know the source of the problem. Question: Why do we get this behaviour for "8" if we don't specify the invariant culture? Additional information: I sent a small program to my client to achieve the results above:

        private int ParseInt(string s)
        {
            int parsedInt = -1000;
            try
            {
                parsedInt = int.Parse(s);
                textBoxMessage.Text = "Success: " + parsedInt;
            }
            catch (Exception ex)
            {
                textBoxMessage.Text = string.Format("Error parsing string: '{0}'", s) + Environment.NewLine +
                                      "Exception message: " + ex.Message;
            }
            textBoxMessage.Text += Environment.NewLine + Environment.NewLine +
                                   "CurrentCulture: " + Thread.CurrentThread.CurrentCulture.Name + "\r\n" +
                                   "CurrentUICulture: " + Thread.CurrentThread.CurrentUICulture.Name + "\r\n";
            return parsedInt;
        }
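    One explanation consistent with those results (an assumption about that machine's regional settings, not a confirmed diagnosis) is that the failing computer has a customised number format whose positive sign is "8": int.Parse with the default NumberStyles.Integer honours the culture's PositiveSign, so a leading "8" is consumed as the sign. The behaviour can be reproduced like this:

        using System;
        using System.Globalization;

        class Repro
        {
            static void Main()
            {
                // Hypothetical culture whose positive sign has been customised to "8".
                var nfi = (NumberFormatInfo)CultureInfo.GetCultureInfo("sv-SE").NumberFormat.Clone();
                nfi.PositiveSign = "8";

                Console.WriteLine(int.Parse("80", nfi));    // 0
                Console.WriteLine(int.Parse("88", nfi));    // 8
                Console.WriteLine(int.Parse("8100", nfi));  // 100
                // int.Parse("8", nfi) throws FormatException: a sign with no digits.

                Console.WriteLine(int.Parse("8", CultureInfo.InvariantCulture)); // 8, as expected
            }
        }

    Checking the regional settings (or dumping Thread.CurrentThread.CurrentCulture.NumberFormat) on the affected machine would confirm or rule this out.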

    Read the article

  • Ignoring build number when referencing dll

    - by brickner
    I have one solution with a .NET 4.0 project (C#) that produces a delay-signed dll, which I dotfuscate and sign. EDIT: This is how I version the dll:
        [assembly: AssemblyVersion("0.7.0.*")]
        [assembly: AssemblyFileVersion("0.7.0.0")]
    I have another solution with a .NET 4.0 project (C++/CLI) that references the signed dll and produces a signed dll (actually, delay-signed and then signed in a post-build step because of a flaw in the C++ build system). The problem is that the reference to the dll contains a specific version number, which includes even the build number (and I do want to have a build number). Every time I build the referenced dll, I have to change the project settings file (.vcxproj) so that it references the new version of the dll. Since I work with source control, this is very inconvenient (different computers might have different build numbers, since each computer builds its own referenced dll - the referenced dll is not in source control). If I don't change the reference, I get a warning:
        warning MSB3245: Could not resolve this reference. Could not locate the assembly...
    and many errors like this:
        error C3083: 'Foo': the symbol to the left of a '::' must be a type
    These are resolved once I change the reference. How do I make the reference ignore the build number, or even the entire version number?
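    One convention that sidesteps this (offered as a suggestion, not something from the original question) is to let only the file version carry the changing build number: references bind against AssemblyVersion, so keeping that value fixed means the .vcxproj never has to change, while AssemblyFileVersion, which the loader ignores, is free to differ on every build:

        using System.Reflection;

        // Fixed: this is the identity other projects reference and bind against.
        [assembly: AssemblyVersion("0.7.0.0")]
        // Free to change per build; not part of the binding identity.
        [assembly: AssemblyFileVersion("0.7.0.123")]

    With strong-named (signed) assemblies the runtime requires an exact AssemblyVersion match, so truly ignoring the version at load time generally means adding bindingRedirect entries to the consumer's config, which is usually more trouble than pinning the version.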

    Read the article

  • Help Me With This Access Query

    - by yae
    I have 2 tables: "products" and "pieces".
    PRODUCTS: idProd, product, price
    PIECES: id, idProdMain, idProdChild, quant
    idProdMain and idProdChild are related to the "products" table. Another consideration is that one product can have several pieces, and one product can itself be a piece. The price of a product equals the sum of quantity * price over all of its pieces. The "products" table contains all products.
    EXAMPLE - TABLE PRODUCTS (idProd - product - price):
        1 - Computer - 300€
        2 - Hard Disk - 100€
        3 - Memory - 50€
        4 - Main Board - 100€
        5 - Software - 50€
        6 - CDroms 100 un. - 30€
    TABLE PIECES (id - idProdMain - idProdChild - quant):
        1 - 1 - 2 - 1
        2 - 1 - 3 - 2
        3 - 1 - 4 - 1
    WHAT I NEED? I need to update the price of the main product when the price of a child product (a piece) changes. Following the previous example, if I change the price of the product "Memory" (which is also a piece) to 60€, then the product "Computer" must change its price to 320€. How can I do this using queries? I have already tried this to obtain the price of the main product, but it doesn't work - the query returns no value:

        SELECT Sum(products.price*pieces.quant) AS Expr1
        FROM products LEFT JOIN pieces
            ON (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdMain)
        WHERE (((pieces.idProdMain)=5));

    MORE INFO: The "products" table contains all the products for sale in the shop. The "pieces" table is there to keep track of the compound products, i.e. to know which products are their children. An example of a compound product: a computer. This product is composed of other products (motherboard, hard disk, memory, CPU, etc.).

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cron job (because it needs to go from inside to outside of a firewall, so no post-commit hook is possible) and svnsync. We installed a pre-revprop-change hook just as told. Everything seems to work fine, except that it doesn't. E.g. when manually executing the script:

        # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But when checking the revision properties it says:

        # svnlook info <path-to-mirror>
        0

        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea where to even start looking. Everything is local (except for the remote master), so there are no server logs to look at. I also tried to manually re-copy the properties via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says:

        Copied properties for revision 19885.

    But when I query them, it's just the same. Any ideas how I could approach this problem, or even better, how to solve it? Any ideas appreciated.

    Read the article

  • How to deploy ClickOnce .Net 3.5 application on 3.0 machine

    - by Buthrakaur
    I have a .NET 3.5 SP1 WPF application which I'm successfully deploying to client computers using ClickOnce. Now I have a new requirement: one of our clients needs to run the application on machines equipped with just .NET 3.0, and it's entirely impossible to upgrade or install anything on those machines. I already tried running the 3.5 application with some of the 3.5 framework DLLs copied to the application directory, and it worked without any problems. The only problem at the moment is ClickOnce. I already managed to include the 3.5 framework System.*.dll files in the list of application files, but it always aborts the installation on the 3.0 machine with this error message: "Unable to install or run the application. The application requires that assembly System.Core Version 3.5.0.0 be installed in the Global Assembly Cache (GAC) first. Please contact your system administrator." I already tried to tweak the prerequisites on the Publish tab of my project, but no combination solved the issue. What part of ClickOnce is responsible for checking prerequisites? I already tried to deploy using mageui.exe, but the 3.5 framework error is still present. What should I do to force ClickOnce to stop checking any prerequisites at all? The project is created using VS2010.

    Read the article

  • Making GUI applications on Linux/Windows. What languages/tools to use?

    - by Javed Ahamed
    My student group and I are trying to continue working over the summer on a project we worked on this semester, so it can become a professional, deployable app. We originally did it in Adobe AIR, but it now seems that the computers this program will be running on will be very slow, maybe 600 MHz and 128-256 MB of RAM, so Flash just isn't going to cut it. It is basically a health diagnosis application that we will be shipping out to impoverished countries. Now comes the real question. We are wondering what language to rebuild our application in. It has to have a good GUI builder associated with it, like the Adobe Flex/AIR GUI builder or Visual Studio's GUI builder, and the application should run primarily on Linux; if it can run on Windows too, that's just a plus. We are all students without really any outside help, so whatever we decide to build this in, there must be ample documentation available for when we hit problems. Some things we have considered so far are Python and Glade, or C# and MonoDevelop, but again we really are not experts on any of this, which is why I am asking for help - I would rather spend the time now choosing the right tools than waste time down the line when we hit a roadblock. Thanks in advance!

    Read the article

  • Eclipse CDT setup for remote build

    - by Posco Grubb
    Is there a better way to set up Eclipse CDT for local editing and remote building? I am working on a C++ project that uses GNU make on Linux. The code is under CVS on a Linux server. When I'm in the lab, I use Eclipse CDT on a Linux-x64 PC. The project is built on a Linux-x86 PC. All the computers in the lab (including the CVS server) have NFS mounts. When I'm at home, I use Eclipse CDT on a Windows 7 PC. The Windows PC connects to the Linux CVS server via an SSH tunnel. To edit source, I rsync the C++ project under the Linux Eclipse workspace back to my Windows Eclipse workspace. (I can also do a remote CVS checkout on the Windows PC.) To build from home, I use a custom build command that SSHes to the Linux-x86 PC, rsyncs the C++ project from my Windows Eclipse workspace to my Linux Eclipse workspace, and then runs make on the Linux-x86 PC, specifying the correct path for the Makefile. In order to go back and forth between lab and home without committing my changes to CVS every time, I use rsync. When I transition from lab to home, I rsync sources to my Windows Eclipse workspace. When I build from home, the sources get rsynced back to the Linux Eclipse workspace. Is there a better, less wonky way to do this? (I'm NOT interested in remote debugging.)

    Read the article

  • Is it possible to do A/B testing by page rather than by individual?

    - by mojones
    Let's say I have a simple ecommerce site that sells 100 different t-shirt designs. I want to do some A/B testing to optimise my sales. Let's say I want to test two different "buy" buttons. Normally, I would use A/B testing to randomly assign each visitor to see button A or button B (and try to ensure that the user experience is consistent by storing that assignment in the session, cookies, etc.). Would it be possible to take a different approach and instead randomly assign each of my 100 designs to use button A or B, and measure the conversion rate as (number of sales of design n) / (pageviews of design n)? This approach would seem to have some advantages: I would not have to worry about keeping the user experience consistent - a given page (e.g. www.example.com/viewdesign?id=6) would always return the same HTML. If I were to test different prices, it would be far less distressing to the user to see different prices for different designs than different prices for the same design on different computers. I also wonder whether it might be better for SEO - my suspicion is that Google would "prefer" to always see the same HTML when crawling a page. Obviously this approach would only be suitable for a limited number of sites; I was just wondering if anyone has tried it?
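    If you do go this route, the assignment just needs to be stable and stateless so a given design page always renders the same button. A tiny illustrative sketch (the hashing choice is an assumption, not a recommendation from the question):

        // Stable, stateless variant assignment per design id.
        static string ButtonVariantFor(int designId)
        {
            // Multiplicative (Knuth) hash; the top bit gives a roughly even split
            // without simply sending all even-numbered ids to one group.
            uint h = unchecked((uint)designId * 2654435761u);
            return (h >> 31) == 0 ? "A" : "B";
        }

    The per-variant conversion rate is then sum(sales of its designs) / sum(pageviews of its designs); the statistical caveat is that designs, not visitors, become the unit of randomisation, so popularity differences between designs add variance to the comparison.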

    Read the article

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this:
    1. It fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depending on time/day).
    2. The data is parsed.
    3. Part of the data is processed by comparing it to pre-existing data (read from the database); calculations are made, and the results are stored in the database. The resulting dataset that has to be stored in the database is, however, much smaller in size (compared to the original data set) and ranges from 5,000-50,000 records. This process almost always updates existing data, perhaps adding a few more records.
    4. Then, the data from step 2 should be kept somehow, somewhere, so that the next time data is fetched, there is a data set that can be used to perform the calculations without touching the pre-existing data in the database.
    I should point out that this data can be lost; it's not irreplaceable (the key information can be read from the database if needed), but keeping it would speed up the process next time. Application components can (and will) be run on different computers (in the same network), so the storage has to be reachable from multiple hosts. I have considered using memcached, but I'm not quite sure I should, because one record is usually no smaller than 200 bytes, and with 1,500,000 records I guess it would amount to over 300 MB of memcached cache... But that doesn't seem scalable to me - what if the data were 5x that amount? What if it were to consume 1-2 GB of cache just to keep data between iterations (which could easily happen)? So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using MySQL temporary tables, as I'm not sure whether they can persist between sessions and be used by other hosts on the network... Any other suggestions? Anything I should consider?

    Read the article

  • Null reference exceptions in .net

    - by Carlo
    Hello, we're having a big problem with our application. It's a rather large application with several modules and thousands and thousands of lines of code. Many parts of the application are designed to exist only with a reference to another object; for example, a Person object can never exist without a House object, so if you at any point in the app say: bool check = App.Person.House == null; check should always be false (by design). So, to keep using that example: while creating modules, testing, and debugging, App.Person.House is never null, but once we shipped the application to our client, they started getting a bunch of NullReferenceExceptions on objects that, by design, should never hold a null reference. They tell us the bug, we try to reproduce it here, but 90% of the time we can't, because here it works fine. The app is being developed with C# and WPF, and by design it only runs on Windows XP SP3 and the .NET Framework v3.5, so we KNOW the user has the same operating system, service pack, and .NET Framework version as we do here, but they still get these weird NullReferenceExceptions that we can't reproduce. So, I'm just wondering if anyone has seen this before and how you fixed it. We have the app running here at least 8 hours a day on 5 different computers, and we never see those exceptions; this only happens to the client for some reason. ANY thought, any clue, any solution that could get us closer to fixing this problem will be greatly appreciated. Thanks!
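    One thing that sometimes helps in this situation (a general suggestion, not a diagnosis) is to make the "never null by design" rule fail fast at the point where the object is created, so the client's crash reports point at whoever broke the invariant rather than at the much later dereference. A minimal sketch, with invented type names:

        using System;

        public class House { }

        public class Person
        {
            public House House { get; private set; }

            public Person(House house)
            {
                if (house == null)
                    // Fails at creation time, so the stack trace names the real culprit.
                    throw new ArgumentNullException("house", "By design a Person cannot exist without a House.");
                House = house;
            }
        }

    Combined with logging the full exception (including the stack trace) on the client's machines, this usually narrows "impossible" NullReferenceExceptions down to a code path that only the client's data or timing ever exercises.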

    Read the article

  • Intermittent SQL Server ODBC Timeout expired

    - by Wili
    We have a bunch of VB6 applications that access two different database servers (both 32-bit Windows 2003; one SQL Server 2000, one SQL Server 2005). About every ten minutes or so, we get a few errors:
        [Microsoft][ODBC SQL Server Driver]Timeout expired
        [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or access denied.
        [Microsoft][ODBC SQL Server Driver]ConnectionRead()
    This is happening on more than a dozen different computers at random times. We also have IP phones that all run through the same network, and those are not having any problems. We can also VNC into a user's computer and reproduce the error they were getting, while VNC itself continues to work. Email also works. It just seems to be ODBC connections to SQL Server that are affected. The errors happen for both of our SQL Servers. We have scoured Google but haven't been able to come up with a solution. Is there anything we can try to diagnose the problem? Is there any fix out there?

    Read the article

  • Why does Tex/Latex not speed up in subsequent runs?

    - by Debilski
    I really wonder why even recent TeX/LaTeX systems do not use any caching to speed up later runs. Every time I fix a single comma*, calling LaTeX costs me about the same amount of time, because it needs to load and convert every single picture file. (* I know that even changing a tiny comma could affect the whole structure, but of course a well-written cache format could detect the impact of such a change. Also, there might be situations where 100% correctness is not needed as long as it's fast.) Is there something in the language of TeX which makes this complicated or impossible to accomplish, or is it just that in the original implementation of TeX there was no need for it (because it would have been slow anyway on the large computers of the time)? But then, on the other hand, why doesn't this annoy other people so much that they have started a fork with some sort of caching (or a transparent conversion of TeX files to a format which is faster to parse)? Is there anything I can do to speed up subsequent runs of LaTeX? Apart from putting all the stuff into chapterXX.tex files and then commenting them out?

    Read the article

  • javascript .push() doesn't seem to work

    - by Ankur
    I am trying to use .push to put items into a JavaScript array. I have created a simplified piece of code. When I click the button with id clickButton, I am expecting to get a series of alerts with numbers, but I get nothing.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Sempedia | Making computers think about data the way we do</title>
        <link href="css/styles.css" rel="stylesheet" type="text/css" />
        <script type="text/javascript" src="js/jquery.js"></script>
        <script>
        $(document).ready( function() {
            var myArray = new Array();   // declare the array
            for (var i = 0; i <= 10; i++) {   // create a loop
                myArray.push(i);   // add an item to array
            }
            $("#clickButton").live('click', function() {   // register the button being clicked
                for (var j = 0; j <= 10; j++) {   // for loop
                    alert(myArray[j]);   // alert one item in the array
                }
            });
        });
        </script>
        </head>
        <body>
        <input type="button" value="click me" id="clickButton" />
        </body>
        </html>

    Read the article

  • Some web pages (especially Apple documentation) cause heavy CPU usage in Windows IE8

    - by Mark Lutton
    Maybe this belongs on Server Fault instead, but some of you may have noticed this issue (particularly those developing on the Mac and using a Windows machine to read the reference material). I posted the same question on a Microsoft forum and got one answer from someone who reproduced the problem, so it's not just my machine. No solution yet. Ever since this month's security updates, I find that many web pages cause the CPU to run at maximum for as long as the web page is visible. This happens in both IE7 and IE8 on at least three different computers (two with Windows XP, one with Vista). Here is one of the pages, running on XP with IE8: http://learning2code.blogspot.com/2007/11/update-using-subversion-with-xcode-3.html Here is one that does it on Vista with IE8: http://developer.apple.com/iphone/library/documentation/Cocoa/Reference/Foundation/Classes/NSString_Class/Reference/NSString.html You can leave the page open for hours and the CPU is still at high usage. This doesn't happen every time; it is not always reproducible. Sometimes it is OK the second or third time the page loads. In IE7 the high usage is in ieframe.dll, version 7.0.6000.16890. In IE8 the high usage is in iertutil.dll, version 8.0.6001.18806.

    Read the article
