Search Results


  • Unregistering COM dll with a C# Setup Project

    - by lb
    Hi All. I've been stuck on this one for a while. I'll try to explain it in the simplest terms and to the best of my knowledge. I will honour any help.

    I've got a C# project which uses a VB6-compiled ActiveX DLL that I'm constantly updating. I compile the setup project, send it to the client, and they run the setup. When building the updated setup project, I would increase the 'Version' of the setup project so it wouldn't bother with 'Another version is already installed'. What started happening after a few updates is that the DLL would not be updated to the new version by the installer: the client computer had the original DLL both installed and registered. First symptom: method-not-found exceptions from the client C# code. This is not a shared DLL and only this application needs it.

    I've noticed that when uninstalling the application (through the usual procedure) the DLL is also not removed from the application folder, although I set this file's 'Permanent' property to false. The registration entries in the registry are maintained as well. I do update the version of the DLL in VS6.0 (usually increasing the build number) before building it. Then in VS2008, I remove it from the References and add it again from the 'Browse' tab, without re-registering it on my dev machine and adding it from the COM tab.

    I've thought of these options (a sketch of the first one follows below):

    1. A custom step in the setup project to run regsvr32.exe /u 'hardcoded path of my dll' at uninstall (ugly).
    2. Somehow find out how the 'Isolate' property can work for me without registering.
    3. Find out how to execute setup project 'Conditions' that would actually check the version of the library and update the file accordingly at every install.

    Any help would be incredibly welcome.
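
    For what it's worth, a minimal sketch of option 1 done as an Installer-class custom action rather than a hardcoded path (the DLL name and the 'targetdir' CustomActionData key are placeholders, not taken from the original project):

        using System;
        using System.Collections;
        using System.ComponentModel;
        using System.Configuration.Install;
        using System.Diagnostics;
        using System.IO;

        [RunInstaller(true)]
        public class ComCleanupAction : Installer
        {
            public override void Uninstall(IDictionary savedState)
            {
                base.Uninstall(savedState);
                // The setup project must pass the install folder in via
                // CustomActionData, e.g. /targetdir="[TARGETDIR]\"
                string dir = Context.Parameters["targetdir"] ?? "";
                string dll = Path.Combine(dir, "MyActiveX.dll"); // placeholder name
                // /s = silent, /u = unregister
                var psi = new ProcessStartInfo("regsvr32.exe", "/s /u \"" + dll + "\"");
                using (Process p = Process.Start(psi))
                {
                    p.WaitForExit();
                }
            }
        }

    The class would be wired up under the setup project's Custom Actions node for Uninstall; the same idea works for re-registering at install time.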


  • Ideas for a rudimentary software licensing implementation

    - by Ross
    I'm trying to decide how to implement a very basic licensing solution for some software I wrote. The software will run on my (hypothetical) clients' machines, with the idea being that the software will immediately quit (with a friendly message) if the client is running it on greater-than-n machines (n being the number of licenses they have purchased). Additionally, the clients are non-tech-savvy to the point where "basic" is good enough.

    Here is my current design, but given that I have little to no experience in the topic, I wanted to ask SO before I started any development on it (a sketch of the client-side check follows below):

    1. A remote server hosts a MySQL database with a table containing two columns: client-key and license quantity.
    2. The client-side application connects to the MySQL database on startup, offering its client-key, which I've put into a properties file packaged into the distribution (I would create a new distribution for each new client).
    3. Chances are, I'll need a second table to store validation history, so that with some short logic the software can decide if it can be run on a given machine (maybe a sliding window of n machines using the software per 24 hours).
    4. If the software cannot establish a connection to the MySQL database, or decides that it's over the n allowed machines per day, it closes.
    5. The connection info for the remote server hosting the MySQL database should be hard-coded into the app? (That sounds like a bad idea, but otherwise they could point it to some other always-validates-to-success server.)

    I think that about covers my initial design. The intent being that while it certainly isn't foolproof, I think I've made it at least somewhat difficult to create an easily-sharable cracking solution. Also, I can easily adjust the license amount for a given client/key pair. I gotta figure this has been done a million times before, so tell me about a better solution that's just as simple to implement and provides the same (low) amount of security. In the event that external libraries are used, I prefer Java, as that's what the software has been written in.
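
    A minimal sketch of the startup check described above, in C# only for illustration (the project itself is Java), with the table and column names (licenses, validation_history) being assumptions:

        using System;
        using MySql.Data.MySqlClient; // assumes MySQL Connector/NET

        static class LicenseGate
        {
            // True if this client-key has been seen on no more than its licensed
            // number of distinct machines within the last 24 hours.
            public static bool IsAllowed(string connStr, string clientKey, string machineId)
            {
                using (var con = new MySqlConnection(connStr))
                {
                    con.Open();

                    // Record this machine's check-in.
                    var log = new MySqlCommand(
                        "INSERT INTO validation_history (client_key, machine_id, seen_at) " +
                        "VALUES (@k, @m, NOW())", con);
                    log.Parameters.AddWithValue("@k", clientKey);
                    log.Parameters.AddWithValue("@m", machineId);
                    log.ExecuteNonQuery();

                    // Sliding-window check against the purchased quantity.
                    var check = new MySqlCommand(
                        "SELECT COUNT(DISTINCT h.machine_id) <= l.quantity " +
                        "FROM validation_history h " +
                        "JOIN licenses l ON l.client_key = h.client_key " +
                        "WHERE h.client_key = @k AND h.seen_at > NOW() - INTERVAL 1 DAY " +
                        "GROUP BY l.quantity", con);
                    check.Parameters.AddWithValue("@k", clientKey);
                    object ok = check.ExecuteScalar();
                    return ok != null && Convert.ToBoolean(ok);
                }
            }
        }

    The caller would quit with the friendly message when IsAllowed returns false, or when the connection attempt throws, matching point 4.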


  • Code Golf: Shortest Turing-complete interpreter.

    - by ilya n.
    I've just tried to create the smallest possible language interpreter. Would you like to join and try? Rules of the game (a non-golfed reference interpreter is sketched below):

    1. You should specify a programming language you're interpreting. If it's a language you invented, it should come with a list of commands in the comments.
    2. Your code should start with an example program and data assigned to your code and data variables. Your code should end with output of your result. It's preferable that there are debug statements at every intermediate step. Your code should be runnable as written. You can assume that the data are 0s and 1s (int, string or boolean, your choice) and the output is a single bit.
    3. The language should be Turing-complete in the sense that for any algorithm written on a standard model, such as a Turing machine, Markov chains, or similar of your choice, it's reasonably obvious (or explained) how to write a program that, after being executed by your interpreter, performs the algorithm.
    4. The length of the code is defined as the length of the code after removal of the input part, output part, debug statements and non-necessary whitespace. Please add the resulting code and its length to the post.
    5. You can't use functions that make the compiler execute code for you, such as eval(), exec() or similar.

    This is a Community Wiki, meaning neither the question nor answers get the reputation points from votes. But vote anyway!
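
    For reference, a deliberately non-golfed sketch of the kind of entry these rules describe: a Brainfuck interpreter in C#. Brainfuck is a standard Turing-complete toy language; this version prints the resulting cell value rather than a single bit and omits the debug statements:

        using System;
        using System.Collections.Generic;

        class Bf
        {
            static void Main()
            {
                // Commands: > < + - [ ] .  (input is baked into the program, no ',')
                string code = "++>+++[<+>-]<.";  // example program: computes 2 + 3
                var tape = new byte[30000];
                int ptr = 0;

                // Pre-match brackets so [ and ] become O(1) jumps.
                var jump = new Dictionary<int, int>();
                var open = new Stack<int>();
                for (int i = 0; i < code.Length; i++)
                {
                    if (code[i] == '[') open.Push(i);
                    else if (code[i] == ']') { int o = open.Pop(); jump[o] = i; jump[i] = o; }
                }

                for (int pc = 0; pc < code.Length; pc++)
                {
                    switch (code[pc])
                    {
                        case '>': ptr++; break;
                        case '<': ptr--; break;
                        case '+': tape[ptr]++; break;
                        case '-': tape[ptr]--; break;
                        case '[': if (tape[ptr] == 0) pc = jump[pc]; break;
                        case ']': if (tape[ptr] != 0) pc = jump[pc]; break;
                        case '.': Console.WriteLine(tape[ptr]); break;  // prints 5
                    }
                }
            }
        }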


  • SharePoint Visual web part and Oracle connection problem

    - by Rishi
    Hi, I'm trying to build a "visual web part" for SharePoint 2010 which should connect to an Oracle table and display records on a SharePoint page. For development, Oracle 11g client (with ODP.NET), SharePoint Server 2010, Visual Studio 2010 and Oracle 10g Express are all running on my machine.

    First, I wrote sample code in an ASP.NET web app to connect to my local Oracle table and display the data in a grid view, and it works fine. My code is:

        OracleConnection con;
        try
        {
            // Connect
            string constr = "Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=XE)));User Id=SYSTEM; Password=password";
            con = new OracleConnection(constr);

            // Open database connection
            con.Open();

            // Execute a SQL SELECT
            OracleCommand cmd = new OracleCommand("select * from T_ACTIONPOINTS WHERE AP_STATUS='Active' ", con);
            OracleDataReader dr = cmd.ExecuteReader();
            GridView.DataSource = dr;
            GridView.DataBind();
            GridView.AllowPaging = true;
        }
        catch (Exception e)
        {
            lblError.Text = e.Message;
        }

    Now I'm trying to create a new SharePoint visual web part project using the same code and deploying it on my local SP server. But when it runs, I get the following error: [error screenshot missing]. Here is my Solution Explorer: [screenshot missing]. It looks like something is wrong with compatibility. Can someone point me in the right direction?


  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more (a sketch of the relevant SqlBulkCopy settings follows below). I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    1. Drop the primary key while I am doing the inserting and recreate it later?
    2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    3. Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
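
    For reference, a minimal sketch of the SqlBulkCopy knobs usually worth experimenting with in this situation (connection-string handling and the chunk-building code are omitted; TableLock in particular changes the locking behavior during the load):

        using System.Data;
        using System.Data.SqlClient;

        static void BulkInsert(DataTable chunk, string connStr)
        {
            // TableLock takes a bulk-update lock for the duration of the copy
            // instead of row locks; with pre-sorted input this is worth testing.
            using (var bulk = new SqlBulkCopy(connStr, SqlBulkCopyOptions.TableLock))
            {
                bulk.DestinationTableName = "BulkData";
                bulk.BatchSize = 0;        // 0 = whole chunk in a single batch
                bulk.BulkCopyTimeout = 0;  // no timeout for long-running loads
                bulk.WriteToServer(chunk);
            }
        }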


  • Help with Neuroph neural network

    - by user359708
    For my graduate research I am creating a neural network that trains to recognize images. I am going much more complex than just taking a grid of RGB values, downsampling, and sending them to the input of the network, like many examples do. I actually use over 100 independently trained neural networks that detect features, such as lines, shading patterns, etc. Much more like the human eye, and it works really well so far!

    The problem is I have quite a bit of training data. I show it over 100 examples of what a car looks like. Then 100 examples of what a person looks like. Then over 100 of what a dog looks like, etc. This is quite a bit of training data! Currently I am running at about one week to train the network. This is kind of killing my progress, as I need to adjust and retrain.

    I am using Neuroph as the low-level neural network API. I am running a dual quad-core machine (16 logical cores with hyperthreading), so this should be fast. My processor percentage is at only 5%. Are there any tricks for Neuroph performance? Or Java performance in general? Suggestions? I am a cognitive psych doctoral student, and I am decent as a programmer, but I do not know a great deal about performance programming.


  • Using Git to work with subversion: Ignoring modifications to tracked files

    - by Chris Nicola
    I am currently working with a subversion repository, but I am using git to work locally on my machine. It makes work much easier, but it also makes some of the bad behavior going on in the subversion repo quite glaring, and that creates problems for me.

    There is a somewhat complex local build process after pulling down the code, and it creates (and unfortunately modifies) a number of files. Obviously these changes are not meant to be committed back to the repository. Unfortunately the build process is actually modifying some tracked files (yes, most likely because someone mistakenly committed these build artifacts to the subversion repository at some point). Since these are modifications, adding them to my ignore file does nothing for me. I can avoid checking these changes back in; I simply don't stage or commit them. But having unstaged local changes means I can't rebase without first cleaning them up.

    What I would like to know is if there is any way to ignore future changes to a set of tracked files. (See the command sketch below.) Alternatively, is there another way to handle the problem I am having, or will I just have to tell whoever checked in these files to clean them up?
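
    For reference, the per-file flag usually suggested for this situation (whether it survives the rebase workflow described above is exactly the open question):

        # stop noticing local modifications to a tracked file
        git update-index --assume-unchanged path/to/generated-file

        # undo it later
        git update-index --no-assume-unchanged path/to/generated-file

        # list flagged files (lowercase letter in the first column)
        git ls-files -v | grep '^[a-z]'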


  • Subscription website architecture questions + SQL Server & .NET

    - by chopps
    Hey Guys, I have a few questions about the architecture of a subscription service I am about to embark on, and I am looking for some feedback on how best to set it up.

    I won’t have as large a customer base as Basecamp, maybe a few hundred, and was wondering what would be a solid architecture for setting up the customer sites. I’m running SQL Server and .NET on a dedicated machine. Should I create a new database for each customer, to have control and isolation of data, or keep them all in one database?

    I am also thinking of creating a sub-domain for each customer as well, so modifications can be made to each site as needed. The customer URLs would look like this:

        https://customer1.foobar.com
        https://customer2.foobar.com

    I am going to have the ability to ‘plug in’ reports that will be uploaded to the site so each customer can customize as needed. Off the top of my head this necessitates having each sub-domain on its own code base for the uploading of these reports. So on the main site the customer would sign up for their new subscription, and I would programmatically create a new directory for the customer from the main code base, then create a sub-domain pointing to the new directory for the customer, and then finally their database.

    Does this sound about right? Am I on the right track? How do other such sites accomplish the same thing? (A sketch of one common alternative is below.) Thanks for letting me bend your ear for a bit on this.
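
    For illustration, a sketch of the common alternative to copying the code base per customer: one code base that resolves the tenant from the sub-domain in the host header. Every name here (the setting key, the database-per-customer naming scheme) is an assumption:

        using System;
        using System.Configuration;
        using System.Web;

        public static class Tenant
        {
            // Maps "customer1.foobar.com" to that customer's connection string,
            // assuming one database per customer named after the sub-domain.
            public static string ConnectionString(HttpRequest request)
            {
                string sub = request.Url.Host.Split('.')[0];   // "customer1"
                string template = ConfigurationManager
                    .AppSettings["TenantConnectionTemplate"];  // assumed setting,
                // e.g. "Server=.;Database=Sub_{0};Integrated Security=SSPI"
                return string.Format(template, sub);
            }
        }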


  • Problem with relative path to image in XAML?

    - by Giri
    I am trying to reference a PNG file in my application's working directory through XAML with the following:

        <Image Name="contactImage">
            <Image.Source>
                <BitmapImage UriSource="/Images/contact.png" />
            </Image.Source>
        </Image>

    Now in my code-behind I try to get the height of the image with contactImage.Source.Height. This fails with System.IO.IOException - cannot locate resource 'images/contact.png'. If I use something like

        PngBitmapDecoder p = new PngBitmapDecoder(
            new Uri("./Images/contact.png", UriKind.Relative),
            BitmapCreateOptions.PreservePixelFormat,
            BitmapCacheOption.Default);

    everything is happy. How can I reference an image in XAML with a path relative to the working directory of the app? BTW, this is being run on a remote machine (if that makes a difference). I have tried "./Images/contact.png" and ".\Images\contact.png" and several other combinations of back/forward slashes and dots.

    Here is the primary difference: any time the file is referenced in XAML, it shows up as pack://application:,,, blah blah blah; when I use the PngBitmapDecoder, it shows up correctly as "./Images/contact.png". How do I reference the image file in XAML and get it to show a source of "./Images/contact.png" instead of pack://application,,, blah blah blah? (One candidate, the siteoforigin pack URI, is sketched below.)
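
    For reference, a sketch of the 'site of origin' pack URI, which WPF uses to address loose files relative to where the application was launched rather than compiled resources (whether it behaves the same in the remote-machine setup here is an assumption):

        using System;
        using System.Windows.Media.Imaging;

        // In the window's code-behind; the XAML equivalent would be
        // <BitmapImage UriSource="pack://siteoforigin:,,,/Images/contact.png" />
        void LoadContactImage()
        {
            // siteoforigin resolves relative to where the .exe was launched,
            // unlike application:,,, which looks for a compiled resource.
            var src = new BitmapImage(
                new Uri("pack://siteoforigin:,,,/Images/contact.png"));
            contactImage.Source = src;
        }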


  • Fast sign in C++ float...are there any platform dependencies in this code?

    - by Patrick Niedzielski
    Searching online, I have found the following routine for calculating the sign of a float in IEEE format. This could easily be extended to a double, too.

        // returns 1.0f for positive floats, -1.0f for negative floats, 0.0f for zero
        inline float fast_sign(float f)
        {
            if (((int&)f & 0x7FFFFFFF) == 0)
                return 0.f; // test exponent & mantissa bits: is input zero?
            else
            {
                float r = 1.0f;
                (int&)r |= ((int&)f & 0x80000000); // mask sign bit in f, set it in r if necessary
                return r;
            }
        }

    (Source: ``Fast sign for 32 bit floats'', Peter Schoffhauzer)

    I am wary of using this routine, though, because of the bit binary operations. I need my code to work on machines with different byte orders, but I am not sure how much of this the IEEE standard specifies, as I couldn't find the most recent version, published this year. Can someone tell me if this will work, regardless of the byte order of the machine? Thanks, Patrick


  • Creating a service (SERVICE_ACCEPT_SESSIONCHANGE)

    - by Ron
    Hi there, I am trying to create a service following the example documented in the link below:

    http://msdn.microsoft.com/en-us/library/bb540475(v=VS.85).aspx

    What I am interested in is being able to catch user "lock" and "unlock" workstation events. Starting from the code in the example provided, I modified the following:

    Line 15, original:

        VOID WINAPI SvcCtrlHandler( DWORD );

    Modified:

        DWORD WINAPI SvcCtrlHandler( DWORD, DWORD, LPVOID, LPVOID );

    Line 141, original:

        gSvcStatusHandle = RegisterServiceCtrlHandler( SVCNAME, SvcCtrlHandler);

    Modified:

        gSvcStatusHandle = RegisterServiceCtrlHandlerEx( SVCNAME, SvcCtrlHandler, NULL);

    Line 244, original:

        SvcStatus.dwControlsAccepted = SERVICE_ACCEPT_STOP;

    Modified:

        gSvcStatus.dwControlsAccepted = SERVICE_ACCEPT_STOP|SERVICE_ACCEPT_SESSIONCHANGE;

    Line 266, original:

        VOID WINAPI SvcCtrlHandler( DWORD dwCtrl )
        {
            // Handle the requested control code.
            switch(dwCtrl)
            {
                case SERVICE_CONTROL_STOP:
                    ReportSvcStatus(SERVICE_STOP_PENDING, NO_ERROR, 0);
                    // Signal the service to stop.
                    SetEvent(ghSvcStopEvent);
                    ReportSvcStatus(gSvcStatus.dwCurrentState, NO_ERROR, 0);
                    return;
                case SERVICE_CONTROL_INTERROGATE:
                    break;
                default:
                    break;
            }
        }

    Modified:

        DWORD WINAPI SvcCtrlHandler( DWORD dwControl, DWORD dwEventType, LPVOID lpEventData, LPVOID lpContext )
        {
            DWORD dwErrorCode = NO_ERROR;
            switch(dwControl)
            {
                case SERVICE_CONTROL_STOP:
                    ReportSvcStatus(SERVICE_STOP_PENDING, NO_ERROR, 0);
                    // Signal the service to stop.
                    SetEvent(ghSvcStopEvent);
                    ReportSvcStatus(gSvcStatus.dwCurrentState, NO_ERROR, 0);
                    break;
                case SERVICE_CONTROL_INTERROGATE:
                    break;
                case SERVICE_CONTROL_SESSIONCHANGE:
                    ReportSvcStatus(gSvcStatus.dwCurrentState, NO_ERROR, 0);
                    break;
                default:
                    break;
            }
            return dwErrorCode;
        }

    With the changes above, my service compiles and installs fine. But when I try to start the service on my Windows 2000 machine, it does not start properly (it gets stuck on the "starting" status). Can anyone please advise? Thank you in advance, Ron


  • IIS SMTP server (Installed on local server) in parallel to Google Apps

    - by sharru
    I am currently using the free version of Google Apps for hosting my email. It works great for my official mail; my email on Google is [email protected]. In addition, I'm sending out high-volume mail (registrations, forgotten passwords, newsletters, etc.) from the website (www.mydomain.com) using IIS SMTP installed on my Windows machine. These emails are sent from [email protected].

    My problem is that when I send email from the website using IIS SMTP to a mail address [email protected], I don't receive the email in Google Apps. (I only receive these emails if I install a POP service on the server with the [email protected] mailbox.) It seems that the IIS SMTP is ignoring the domain MX records and just delivers these emails to my local server.

    Here are my DNS records for mydomain.com:

        mydomain.com  A    82.80.200.20                                  3600s
        mydomain.com  TXT  v=spf1 ip4: 82.80.200.20 a mx ptr include:aspmx.googlemail.com ~all
        mydomain.com  MX   preference: 10  exchange: aspmx2.googlemail.com     3600s
        mydomain.com  MX   preference: 10  exchange: aspmx3.googlemail.com     3600s
        mydomain.com  MX   preference: 10  exchange: aspmx4.googlemail.com     3600s
        mydomain.com  MX   preference: 10  exchange: aspmx5.googlemail.com     3600s
        mydomain.com  MX   preference: 1   exchange: aspmx.l.google.com        3600s
        mydomain.com  MX   preference: 5   exchange: alt1.aspmx.l.google.com   3600s
        mydomain.com  MX   preference: 5   exchange: alt2.aspmx.l.google.com   3600s

    Please help! Thanks.


  • Eclipse does not start on Windows 7

    - by van
    Today Eclipse suddenly decided to stop working. The last thing I did was close all perspectives and close Eclipse. When loading Eclipse from the command prompt using "eclipse.exe -clean", the splash screen loads for a split second then exits. When I run the command

        eclipsec -consoleLog -debug

    it results in the following output:

        Start VM: -Dosgi.requiredJavaVersion=1.6
        -Dhelp.lucene.tokenizer=standard
        -Xms4096m
        -Xmx4096m
        -XX:MaxPermSize=512m
        -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
        -os win32
        -ws win32
        -arch x86_64
        -showsplash d:\devtools\eclipse\\plugins\org.eclipse.platform_4.3.0.v20130605-2000\splash.bmp
        -launcher d:\devtools\eclipse\eclipsec.exe
        -name Eclipsec
        --launcher.library d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20130521-0416\eclipse_1503.dll
        -startup d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
        --launcher.appendVmargs
        -product org.eclipse.epp.package.standard.product
        -consoleLog
        -debug
        -vm C:/Program Files/Java/jdk1.6.0_37/bin\..\jre\bin\server\jvm.dll
        -vmargs
        -Dosgi.requiredJavaVersion=1.6
        -Dhelp.lucene.tokenizer=standard
        -Xms4096m
        -Xmx4096m
        -XX:MaxPermSize=512m
        -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar

        Error occurred during initialization of VM
        Incompatible minimum and maximum heap sizes specified

    Checking Task Manager shows no Java process running, and both the CPU and memory usage are very low. I have tried:

    1. Re-installing Eclipse
    2. Re-starting my machine

    But running eclipsec -consoleLog -debug from the command prompt still results in the issue:

        Error occurred during initialization of VM
        Incompatible minimum and maximum heap sizes specified


  • C# WinForms. Multiple Forms in separate threads

    - by Calum Murray
    I'm trying to run an ATM simulation in C# with Windows Forms that can have more than one instance of an ATM machine transacting with a bank account simultaneously. The idea is to use semaphores/locking to block critical code that may lead to race conditions. My question is this: how can I run two Forms simultaneously on separate threads? In particular, how does all of this fit in with the Application.Run() that's already there? Here's my main class:

        public class Bank
        {
            private Account[] ac = new Account[3];
            private ATM atm;

            public Bank()
            {
                ac[0] = new Account(300, 1111, 111111);
                ac[1] = new Account(750, 2222, 222222);
                ac[2] = new Account(3000, 3333, 333333);
                Application.Run(new ATM(ac));
            }

            static void Main(string[] args)
            {
                new Bank();
            }
        }

    ...and this is the form I want to run two of on separate threads:

        public partial class ATM : Form
        {
            // local reference to the array of accounts
            private Account[] ac;

            // this is a reference to the account that is being used
            private Account activeAccount = null;

            private static int stepCount = 0;
            private string buffer = "";

            // the ATM constructor takes an array of account objects as a reference
            public ATM(Account[] ac)
            {
                InitializeComponent(); // sets up the Form ATM GUI in ATM.Designer.cs
                this.ac = ac;
            }

            // ...
        }

    I've tried using:

        Thread ATM2 = new Thread(new ThreadStart(/* What goes in here? */));

    But what method do I put in the ThreadStart constructor, since the ATM form is event-driven and there's no one method controlling it? (A sketch of one common pattern follows below.) Thanks, Calum
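
    A minimal sketch of the usual pattern: give each form its own message loop by calling Application.Run on its own STA thread (this reuses the Account and ATM types from the question, and whether Bank's constructor is the right place for it is an assumption):

        using System.Threading;
        using System.Windows.Forms;

        public class Bank
        {
            private Account[] ac = new Account[3];

            public Bank()
            {
                ac[0] = new Account(300, 1111, 111111);
                ac[1] = new Account(750, 2222, 222222);
                ac[2] = new Account(3000, 3333, 333333);

                // One message loop per ATM window, each on its own thread.
                StartAtm();
                StartAtm();
            }

            private void StartAtm()
            {
                Thread t = new Thread(() => Application.Run(new ATM(ac)));
                t.SetApartmentState(ApartmentState.STA); // WinForms requires STA
                t.Start();
            }

            static void Main(string[] args)
            {
                new Bank();
            }
        }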


  • Parsing multibyte string in PHP

    - by Petr Peller
    I would like to write an (HTML) parser based on a state machine, but I have doubts about how to actually read/use the input. I decided to load the whole input into one string and then work with it as an array, holding its index as the current parsing position. There would be no problem with a single-byte encoding, but in a multi-byte encoding each value does not represent a character, but a byte of a character. Example:

        $mb_string = 'žšcr'; // 4 characters in UTF-8, the first two multi-byte
        for ($i = 0; $i < 4; $i++) {
            echo $mb_string[$i], PHP_EOL;
        }

    This outputs four garbled bytes (one per line) instead of the expected characters. It means I cannot iterate through the string in a loop to check single characters, because I never know if I am in the middle of a character or not. So the questions are:

    1. How do I read a single character from a string in a multi-byte-safe, performance-friendly way?
    2. Is it a good idea to work with the string as if it were an array in this case?
    3. How would you read the input?


  • Unpacking gems [Rails 2.3.5]

    - by yuval
    I have the following gems defined in my environment.rb file:

        config.gem "authlogic"
        config.gem "paperclip"
        config.gem "pauldix-feedzirra", :lib => "feedzirra", :source => "http://gems.github.com"
        config.gem 'whenever', :lib => false, :source => 'http://gemcutter.org/'

    I have them installed on my local computer and everything is working well. Since I am working on a shared server (DreamHost), I need to unpack those gems to get them to work (I can't install them the way I did on my own computer). Before uploading, I ran the following on my local machine:

        rake gems:unpack

    This created the following folders in /vendor/gems: authlogic-2.1.3, paperclip-2.3.1.1, pauldix-feedzirra-0.0.18, whenever-0.4.1. So it looks like they're all there. When I run rake db:migrate on the server, though, I get the following error:

        Missing these required gems: pauldix-feedzirra

    For some reason, the unpacked feedzirra gem is not detected. Could anybody give me a clue as to why this is happening and how to solve it? Thanks!


  • Connect xampp to MongoDB

    - by Jhonny D. Cano -Leftware-
    Hello, I have a xampp 1.7.3 instance and a MongoDB 1.2.4 server running on the same machine. I want to connect them, so I have basically been following this tutorial on php.net. It seems to connect, but the cursors are always empty, and I don't know what I am missing. Here is the code I am trying; $cursor->valid() always says false. Thanks.

        <?php
        $m = new Mongo(); // connect
        try {
            $m->connect();
        } catch (MongoConnectionException $ex) {
            echo $ex;
        }
        echo "conecta...";
        $dbs = $m->listDBs();
        if ($dbs == NULL) {
            echo $m->lastError();
            return;
        }
        foreach ($dbs as $db) {
            echo $db;
        }
        $db = $m->selectDB("CDO");
        echo "elige bd...";
        $col = $db->selectCollection("rep_consulta");
        echo "elige col...";
        $rangeQuery = array('id' => array('$gt' => 100));
        $col->insert(array('id' => 456745764, 'nombre' => 'cosa'));
        $cursor = $col->find()->limit(10);
        echo "buscando...";
        var_dump($cursor);
        var_dump($cursor->valid());
        if ($cursor == NULL)
            echo 'cursor null';
        while ($cursor->hasNext()) {
            $item = $cursor->current();
            echo "en while...";
            echo $item["nombre"].'...';
        }
        ?>

    Doing this from the command line works perfectly:

        use CDO
        db.rep_consulta.find()   // lots of data here


  • What does "cpuid level" mean? Asking just out of curiosity

    - by ogzylz
    For example, here is the info for just 2 cores of a 16-core machine. What does the "cpuid level : 6" line mean? If you can also provide info about the lines "bogomips : 5992.10" and "clflush size : 64", I will be appreciative.

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 15
        model           : 6
        model name      : Intel(R) Xeon(TM) CPU 3.00GHz
        stepping        : 8
        cpu MHz         : 2992.689
        cache size      : 4096 KB
        physical id     : 0
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 6
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
        bogomips        : 5992.10
        clflush size    : 64
        cache_alignment : 128
        address sizes   : 40 bits physical, 48 bits virtual
        power management:

        processor       : 1
        vendor_id       : GenuineIntel
        cpu family      : 15
        model           : 6
        model name      : Intel(R) Xeon(TM) CPU 3.00GHz
        stepping        : 8
        cpu MHz         : 2992.689
        cache size      : 4096 KB
        physical id     : 1
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 6
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
        bogomips        : 5985.23
        clflush size    : 64
        cache_alignment : 128
        address sizes   : 40 bits physical, 48 bits virtual
        power management:


  • Copy whole SQL Server database into JSON from Python

    - by Oli
    I'm facing an atypical conversion problem. About a decade ago I coded up a large site in ASP. Over the years this turned into ASP.NET but kept the same database. I've just re-done the site in Django and I've copied all the core data, but before I cancel my account with the host, I need to make sure I've got a long-term backup of the data, so if it turns out I'm missing something, I can copy it from a local copy.

    To complicate matters, I no longer have Windows. I moved to Ubuntu on all my machines some time back. I could ask the host to send me a backup, but having no access to a machine with MSSQL, I wouldn't be able to use that if I needed to. So I'm looking for something that does:

        db = {}
        for table in database:
            db[table.name] = [row for row in table]

    And then I could serialize db off somewhere for later consumption... But how do I do the table iteration? Is there an easier way to do all of this? Can MSSQL do a cross-platform SQL dump (including data)?

    For previous MSSQL work I've used pymssql, but I don't know how to iterate the tables and copy rows (ideally with column headers so I can tell what the data is). I'm not looking for much code, but I need a poke in the right direction.


  • Set up Gitosis, but can't clone

    - by Tim Rupe
    I've set up Gitosis on a remote Ubuntu box, which I will refer to as linuxserver in the following commands. I'm connecting from a Windows box using Cygwin. I followed the instructions according to: http://scie.nti.st/2007/11/14/hosting-git-repositories-the-easy-and-secure-way

    I had no problems up until I needed to clone the gitosis-admin repository to my local machine:

        git clone git@linuxserver:gitosis-admin.git

    When I do this, the command executes but hangs, displaying nothing until I ctrl-c to get back to a command prompt. No messages are displayed at all. I'm pretty sure I have my ssh keys set up properly, because logging into my regular account using "ssh linuxserver" works perfectly without asking for a password.

    Edit: Over the weekend I set up a nearly identical Ubuntu box at home and had no problem setting up Gitosis. The only difference was that I was connecting from OS X instead of Cygwin.

    Edit: I've also discovered that when using the Bash shell provided with "Git Extensions", I have no problems, so the issue definitely seems to be some kind of Cygwin conflict.

    Edit: Just an update: about a month after posting this question, I switched to Mercurial and found that I prefer it much more than git. Thanks for the suggestions, but I don't plan on going back to git to try any of them out.


  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, MB) can do. This led me to wonder what kind of raw memory bandwidth a modern commodity machine can muster.

    My aging Core2Duo gives me 1.5 GB/s if I use one thread to memcpy() (and, understandably, less if I memcpy() with both cores simultaneously). While 1.5 GB is a fair amount of data, the real-time application I'm working on will have something like 1/50th of a second, which means 30 MB. Basically, almost nothing. And perhaps worst of all, as I add multiple cores, I can process a lot more data without any increased performance for the needed duplication step. But a low-end Core2Duo isn't exactly hot stuff these days.

    Are there any sites with information, such as actual benchmarks, on raw memory bandwidth on current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it will get? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?


  • MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?

    - by apenwarr
    I'm trying to run the code below to insert a whole lot of records (from a file with a weird file format) into my Access 2003 database from VBA. After many, many experiments, this code is the fastest I've been able to come up with: it does 10000 records in about 15 seconds on my machine. At least 14.5 of those seconds (ie. almost all the time) is in the single call to UpdateBatch. I've read elsewhere that the JET engine doesn't support UpdateBatch. So maybe there's a better way to do it.

    Now, I would just think the JET engine is plain slow, but that can't be it. After generating the 'testy' table with the code below, I right clicked it, picked Export, and saved it as XML. Then I right clicked, picked Import, and reloaded the XML. Total time to import the XML file? Less than one second, ie. at least 15x faster. Surely there's an efficient way to insert data into Access that doesn't require writing a temp file?

        Sub TestBatchUpdate()
            CurrentDb.Execute "create table testy (x int, y int)"

            Dim rs As New ADODB.Recordset
            rs.CursorLocation = adUseServer
            rs.Open "testy", CurrentProject.AccessConnection, _
                adOpenStatic, adLockBatchOptimistic, adCmdTableDirect

            Dim n, v
            n = Array(0, 1)
            v = Array(50, 55)

            Debug.Print "starting loop", Time
            For i = 1 To 10000
                rs.AddNew n, v
            Next i
            Debug.Print "done loop", Time

            rs.UpdateBatch
            Debug.Print "done update", Time

            CurrentDb.Execute "drop table testy"
        End Sub

    I would be willing to resort to C/C++ if there's some API that would let me do fast inserts that way. But I can't seem to find it. It can't be that Application.ImportXML is using undocumented APIs, can it?


  • Autocompleting \cite{} with emacs + auctex gives "cite: no such database entry"

    - by Alejandro Weinstein
    Hi: I am running Emacs 23.1.1 and AucTeX 11.85 on an Ubuntu 8.10 machine. After opening a tex file, the first time I try to use the autocompletion of the \cite{} command, I get "cite: info not available, use `C-c &' to get it." in the minibuffer. After doing the C-c &, I get "byte-code: No BibTeX entry with citation key". Subsequent calls to \cite give me the message "cite: no such database entry". I have a \bibliography{library} in my tex file, and the \cite{} entries that I did manually work as expected. I have the following in my .emacs:

        (require 'reftex)
        (setq-default TeX-master nil)
        (add-hook 'LaTeX-mode-hook 'TeX-PDF-mode)    ; turn on pdf-mode. AUCTeX will call
                                                     ; pdflatex to compile instead of latex.
        (add-hook 'LaTeX-mode-hook 'LaTeX-math-mode) ; turn on math-mode by default
        (add-hook 'LaTeX-mode-hook 'reftex-mode)     ; turn on RefTeX mode by default
        (add-hook 'LaTeX-mode-hook 'flyspell-mode)   ; turn on flyspell mode by default
        (setq reftex-plug-into-AUCTeX t)
        (setq TeX-auto-save t)
        (setq TeX-save-query nil)
        (setq TeX-parse-self t)
        (setq-default TeX-master nil)

    I also tried the suggestions in http://stackoverflow.com/questions/2699017/suggestion-for-cite-in-emacs-with-auctex, but it didn't work either. Alejandro.


  • BITS, TakeOwnership, and Kerberos / Windows Integrated Authentication

    - by Charlie Flowers
    We're using BITS to upload files from machines in our retail locations to our servers. BITS will stop transferring a file if the user who owns the BITS job logs off. Therefore, we're using a Windows service running as LocalSystem to submit the jobs to BITS and be the job owner. This allows transfers to continue 24/7.

    However, it raises a question about authentication. We want the BITS server extensions in IIS to use Kerberos to authenticate the client machine. As far as I can tell, that leaves us with only two options, neither of which is ideal: either we create an "ImageUploader" account and store its username/password in a config file that the Windows service uses as credentials for the BITS job, or we ask the logged-on user who creates the BITS job for his password, and then use his credentials for the BITS job.

    I guess the third option is not to use Kerberos, and maybe go with Basic Auth plus SSL. I'm sure I'm wrong and there's a better option. Is there? Thanks in advance.


  • ASP.NET application using old connection string.

    - by Doug S.
    I am trying to publish a website using ASP.NET MVC3, EF, and Code First, with a SQL Server 2008 backend. On my local machine I was using a SQL Express db for development, but now that I am pushing live, I want to use my hosted production database. The problem is that when I try to run the application, it is still using my local db connection string. I have completely removed the old connection string from my web.config file and am using the <clear /> tag before creating the new connection string. I have also cleaned the solution and rebuilt, but somehow it is still connecting to the old db. What am I missing? This is the new connection string (a sketch for checking which connection strings actually win at runtime follows below):

        <connectionStrings>
          <clear />
          <add name="CellularAutomataDBContext"
               connectionString="Server=XXX; Database=CellularAutomata; User ID=XXX; Password=XXX; Trusted_Connection=False"
               providerName="System.Data.SqlClient" />
        </connectionStrings>
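
    A small diagnostic sketch (not from the original post): dump the connection strings the runtime actually resolves, which shows whether a parent web.config or machine.config is still contributing the stale entry:

        using System;
        using System.Configuration;

        public static class ConnStrDump
        {
            // Call from e.g. Application_Start and inspect the output or a log.
            public static void Dump()
            {
                foreach (ConnectionStringSettings cs in ConfigurationManager.ConnectionStrings)
                {
                    Console.WriteLine("{0} -> {1}", cs.Name, cs.ConnectionString);
                }
            }
        }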

