Search Results

Search found 11195 results on 448 pages for 'disconnected environment'.


  • Convert pre-IEEE-754 C++ floating-point numbers to/from C#

    - by Richard Kucia
    Before .NET, before math coprocessors, before IEEE-754, Microsoft defined a bit pattern for floating-point numbers. Old versions of the C++ compiler happily used that definition. I am writing a C# app that needs to read/write such floating-point numbers in a file. How can I do the conversions between the two bit formats? I need conversion methods in both directions. This app is going to run in a PocketPC/WinCE environment. Changing the structure of the file is out of scope for this project. Is there a C++ compiler option that instructs it to use the old FP format? That would be ideal. I could then exchange data between the C# code and the C++ code using a null-terminated text string, and the C++ methods would be simple wrappers around the sprintf and atof functions. At the very least, I'm hoping someone can reply with the bit definitions for the old FP format, so I can put together a low-level bit-manipulation algorithm if necessary. Thanks.
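    If the file uses the old 32-bit Microsoft Binary Format (MBF) singles - an exponent byte biased by 128, then a sign bit, then 23 mantissa bits, interpreted as 0.1mantissa x 2^(exponent - 128) - the conversion to IEEE 754 is a small bit-shuffle, because the stored mantissa bits carry over and only the sign position and exponent bias change. The sketch below only illustrates that layout assumption: the helper names are made up, and it ignores overflow, underflow and MBF's lack of infinities/NaNs.

      static float MbfToSingle(byte[] mbf)
      {
          // MBF signals zero with a zero exponent byte
          if (mbf[3] == 0) return 0.0f;

          uint sign     = (uint)(mbf[2] & 0x80) << 24;                      // sign bit moves up to bit 31
          uint exponent = (uint)(mbf[3] - 2) << 23;                         // 0.1m * 2^(e-128) == 1.m * 2^(e-129), so rebias by -2
          uint mantissa = (uint)(mbf[2] & 0x7F) << 16 | (uint)mbf[1] << 8 | mbf[0];

          return BitConverter.ToSingle(BitConverter.GetBytes(sign | exponent | mantissa), 0);
      }

      static byte[] SingleToMbf(float value)
      {
          if (value == 0.0f) return new byte[4];

          uint bits = BitConverter.ToUInt32(BitConverter.GetBytes(value), 0);
          byte exponent  = (byte)(((bits >> 23) & 0xFF) + 2);               // rebias back towards MBF
          byte signAndHi = (byte)(((bits >> 24) & 0x80) | ((bits >> 16) & 0x7F));

          return new byte[] { (byte)bits, (byte)(bits >> 8), signAndHi, exponent };
      }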


  • C#: Accessing PerformanceCounters for the ".NET CLR Memory category"

    - by Mads Ravn
    I'm trying to access the performance counters located in the ".NET CLR Memory" category through C# using the PerformanceCounter class. However, I cannot instantiate the counter with what I would expect to be the correct category/counter name: new PerformanceCounter(".NET CLR Memory", "# bytes in all heaps", Process.GetCurrentProcess().ProcessName); I tried looping through categories and counters using the following code: string[] categories = PerformanceCounterCategory.GetCategories().Select(c => c.CategoryName).OrderBy(s => s).ToArray(); string toInspect = string.Join(",\r\n", categories); System.Text.StringBuilder interestingToInspect = new System.Text.StringBuilder(); string[] interestingCategories = categories.Where(s => s.StartsWith(".NET") || s.Contains("Memory")).ToArray(); foreach (string interestingCategory in interestingCategories) { PerformanceCounterCategory cat = new PerformanceCounterCategory(interestingCategory); foreach (PerformanceCounter counter in cat.GetCounters()) { interestingToInspect.AppendLine(interestingCategory + ":" + counter.CounterName); } } toInspect = interestingToInspect.ToString(); But I could not find anything that seems to match. Is it not possible to observe these values from within the CLR, or am I doing something wrong? The environment, should it matter, is .NET 4.0 running on a 64-bit Windows 7 box.
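    A diagnostic worth trying (only a sketch, not a definitive fix): ".NET CLR Memory" is an instance-based category, so the instance string has to match exactly what the runtime registered - usually the process name, possibly suffixed when several processes share a name - and the registered counter name is "# Bytes in all Heaps". Dumping the category's instances and counters shows the exact strings to pass to the PerformanceCounter constructor:

      var category = new PerformanceCounterCategory(".NET CLR Memory");
      foreach (string instance in category.GetInstanceNames())
      {
          foreach (PerformanceCounter counter in category.GetCounters(instance))
          {
              // Prints the exact instance and counter names the provider registered
              Console.WriteLine("{0} | {1} = {2}", instance, counter.CounterName, counter.NextValue());
          }
      }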


  • Can I ignore a SIGFPE resulting from division by zero?

    - by Mikeage
    I have a program which deliberately performs a divide by zero (and stores the result in a volatile variable) in order to halt in certain circumstances. However, I'd like to be able to disable this halting, without changing the macro that performs the division by zero. Is there any way to ignore it? I've tried using #include <signal.h> ... int main(void) { signal(SIGFPE, SIG_IGN); ... } but it still dies with the message "Floating point exception (core dumped)". I don't actually use the value, so I don't really care what's assigned to the variable; 0, random, undefined... EDIT: I know this is not the most portable, but it's intended for an embedded device which runs on many different OSes. The default halt action is to divide by zero; other platforms require different tricks to force a watchdog induced reboot (such as an infinite loop with interrupts disabled). For a PC (linux) test environment, I wanted to disable the halt on division by zero without relying on things like assert.


  • Why is OpenSubKey() returning null on my Win 7 64 bit system?

    - by BrMcMullin
    Has anyone seen OpenSubKey() and other Microsoft.Win32 registry functions return null on 64 bit systems when 32 bit registry keys are under Wow6432node in the registry? I'm working on a unit testing framework that makes a call to OpenSubKey() from the .net library. My dev system is a Win 7 64 bit environment with VS 2008 SP1 and the Win 7 SDK installed. The application we're unit testing is a 32 bit application, so the registry is virtualized under HKLM\Software\Wow6432node. When we call: Registry.LocalMachine.OpenSubKey( @"Software\MyCompany\MyApp\" ); Null is returned, however explicitly stating to look here works: Registry.LocalMachine.OpenSubKey( @"Software\Wow6432node\MyCompany\MyApp\" ); From what I understand this function should be agnostic to 32 bit or 64 bit environments and should know to jump to the virtual node. Even stranger is the fact that the exact same call inside a compiled and installed version of our application is running just fine on the same system and is getting the registry keys necessary to run; which are also being placed in HKLM\Software\Wow6432node. Any suggestions? Thanks in advance!
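    A hedged explanation that fits these symptoms: registry redirection is applied per process, not per API call. The installed application runs as a 32-bit process, so WOW64 silently redirects its HKLM\Software access to Wow6432Node; if the unit-test host runs as a 64-bit process, it reads the native 64-bit view, where Software\MyCompany\MyApp was never created. On .NET 4.0 the 32-bit view can be requested explicitly - the sketch below uses RegistryView, which does not exist in the .NET 3.5 framework that VS 2008 targets, so on 3.5 the usual workaround is to force the test host to run as x86 or to probe the Wow6432Node path as in the question:

      using Microsoft.Win32;

      using (RegistryKey hklm32 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32))
      using (RegistryKey appKey = hklm32.OpenSubKey(@"Software\MyCompany\MyApp"))
      {
          if (appKey != null)
          {
              // read the 32-bit application's settings here
          }
      }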


  • Test-First development tool for SQL Server 2005?

    - by Jeff Jones
    For several years I have been using a testing tool called qmTest that allows me to do test-driven database development for some Firebird databases. I write a test for a new feature (table, trigger, stored procedure, etc.) until it fails, then modify the database until the test passes. If necessary, I do more work on the test until it fails again, then modify the database until the test passes. Once the test for the feature is complete and passes 100% of the time, I save it in a suite of other tests for the database. Before moving on to another test or a deployment, I run all the tests as a suite to make sure nothing is broken. Tests can have dependencies on other tests, and the results are recorded and displayed in a browser. Nothing new here, I am sure. Our shop is aiming toward standardizing on MSSQLServer and I want to use the same procedure for developing our databases. Does anyone know of tools that allow or encourage this kind of development? I believe the Team System does, but we do not own that at this point, and probably will not for some time. I am not opposed to scripting, but would welcome a more graphical environment. Any suggestions?


  • Can ElementTree be told to preserve the order of attributes?

    - by dmckee
    I've written a fairly simple filter in Python using ElementTree to munge the contents of some XML files, and it works, more or less. But it reorders the attributes of various tags, and I'd like it to not do that. Does anyone know a switch I can throw to make it keep them in the specified order? Context for this: I'm working with, and on, a particle physics tool that has a complex but oddly limited configuration system based on XML files. Among the many things set up that way are the paths to various static data files. These paths are hardcoded into the existing XML and there are no facilities for setting or varying them based on environment variables, and in our local installation they are necessarily in a different place. This isn't a disaster, because the combined source- and build-control tool we're using allows us to shadow certain files with local copies. But even though the data fields are static the XML isn't, so I've written a script for fixing the paths; with the attribute rearrangement, though, diffs between the local and master versions are harder to read than necessary. This is my first time taking ElementTree for a spin (and only my fifth or sixth Python project), so maybe I'm just doing it wrong. Abstracted for simplicity, the code looks like this: tree = elementtree.ElementTree.parse(inputfile) i = tree.getiterator() for e in i: e.text = filter(e.text) tree.write(outputfile) Reasonable or dumb? Related links: How can I get the order of an element attribute list using Python xml.sax? Preserve order of attributes when modifying with minidom


  • Ruby script/console and Ruby script/server using two different DBs?

    - by aronchick
    Has anyone seen where script/console and script/server load two different databases (though both report using the same)? Here's the first output $ script/server => Booting WEBrick => Rails 2.3.5 application starting on http://0.0.0.0:3000 => Call with -d to detach => Ctrl-C to shutdown server [2010-03-21 15:54:05] INFO WEBrick 1.3.1 [2010-03-21 15:54:05] INFO ruby 1.8.7 (2010-01-10) [i386-mingw32] [2010-03-21 15:54:05] INFO WEBrick::HTTPServer#start: pid=7148 port=3000 No errors. I then run my standard code for entering a form - no problems. Checking the Dev Database (.yml at bottom): mysql> select * from books; [...] | 712 | Book | Book Name | 2010-03-21 22:29:22 | 2010-03-21 22:29:22 | [...] 712 rows in set (0.00 sec) The code CLEARLY saved it seconds ago And now here's the output of script/console: $ script/console Loading development environment (Rails 2.3.5) >> Books.all => [] Nothing. Further, upon further inspection, it's using the production database, but I can't figure out why. Any thoughts here? All consoles have been closed and reopened. UPDATE: Requested .yml file (can't see how it'd be helpful (user name and password are all the same for each)) - development: adapter: mysql database: BooksDBdev username: <user name> password: <long string> timeout: 5000 # Warning: The database defined as "test" will be erased and # re-generated from your development database when you run "rake". # Do not set this db to the same as development or production. test: adapter: mysql database: BooksDBtest username: <user name> password: <long string> timeout: 5000 production: adapter: mysql database: BooksDB username: <user name> password: <long string> timeout: 5000


  • Need events to execute on timer ticks with metronome precision

    - by user295734
    I set up a timer to call an event in my application. The problem is that the event execution is being skewed by other Windows operations, e.g. opening a window or loading a web page. I need the event to execute exactly on time, every time. When I first set up the app, I used a sound file, like a metronome, to listen to the event firing; in a steady state it fires right on, but as soon as I do something in the Windows environment, the sound fires slower, then sort of speeds up a bit to catch up. So I added a logging method to the event to catch the timer ticks. From that data, it appears that the timer is not being affected by the Windows activity, but my application's event calls are. I figured this out by checking DateTime.Now in the event, with the interval set to 250 milliseconds, which is 4 clicks per second. You get data something like below. (sec):(ms) 1:000 1:250 1:500 1:750 2:000 2:250 2:500 2:750 3:000 3:250 3:500 3:750 (let's say I execute some Windows event) (time will skew) 4:122 4:388 4:600 4:876 (stop doing what I was doing in Windows) (going to shorten the data for simplicity; my list was 30 sec long) 5:124 5:268 5:500 5:750 (you would see the time go back to the same milliseconds it was at the beginning) 6:000 6:250 6:500 6:750 7:000 7:250 7:500 7:750 So I'm thinking the timer continues to fire on the same millisecond every time, but it's the event that is being skewed to fire off time by other Windows operations. It's not a huge skew, but for what I need to accomplish, it's unacceptable. Is there anything I can do in .NET (hoping to use a XAML/WPF application) that will correct the skewing of events? Thanks.
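    One pattern that usually reduces this kind of drift (a sketch for a dedicated worker thread; PlayClick and the running flag are placeholders, and no managed timer can guarantee hard real-time delivery on Windows): schedule every tick against an absolute timeline taken from a Stopwatch instead of "previous tick plus interval", so a late tick shortens the next wait rather than pushing the whole sequence back.

      var interval = TimeSpan.FromMilliseconds(250);
      var clock = System.Diagnostics.Stopwatch.StartNew();
      var next = interval;

      while (running)                                  // 'running' is a flag owned by the caller
      {
          TimeSpan wait = next - clock.Elapsed;
          if (wait > TimeSpan.Zero)
              System.Threading.Thread.Sleep(wait);

          PlayClick();                                 // hypothetical: whatever work the tick should trigger
          next += interval;                            // schedule relative to the start, not to "now"
      }

    Note that Sleep granularity is bounded by the Windows timer resolution (roughly 15 ms unless it is raised), so this bounds the error and stops it accumulating rather than eliminating it.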


  • How to move/drag multiple images with CALayer?

    - by gbf.sara
    I have more than 10 images on screen and I want to move each image using the CALayer concept. When I try to move the first image, it works. When I drag my second image, it also works. But after that, when I go back to the first image and try to move it, I'm stuck; it's not working, and I don't know where it went wrong. Here's my code: //Assigning Layer to UIImage// layer = [CALayer layer]; UIImage *image1 = [UIImage imageNamed:[NSString stringWithFormat:@"%@_thumb.png",[tiv imageName]]]; layer.contents = (id)image1.CGImage; layer.position=CGPointMake(200,200); layer.frame = CGRectMake(110, 180, 100, 100); [self.view.layer addSublayer:layer]; [layer needsDisplay]; [self.view.layer needsDisplay]; [layer needsLayout]; //Here I try to move my image// - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { [super touchesMoved:touches withEvent:event]; NSArray *allTouches = [touches allObjects]; int count = [allTouches count]; if (count == 1) { if (CGRectContainsPoint([layer frame], [[allTouches objectAtIndex:0] locationInView:self.view])) { layer.position = [[allTouches objectAtIndex:0] locationInView:self.view]; return; } } if (count > 1) { if (CGRectContainsPoint([layer frame], [[allTouches objectAtIndex:-1] locationInView:self.view])) { layer.position = [[allTouches objectAtIndex:0] locationInView:self.view]; return; } } } Also, I've planned to use a single CALayer for multiple images: when I click an image, it should be placed in that CALayer, and when I click another, the same CALayer should take the latter one, so that whenever I drag an image it moves (no matter how many images there are). Do you think this concept will work out, or is there any other idea? My concept may seem to be a basic one; I'm new to this environment, so kindly help me.


  • Starting with versioning mysql schemata without overkill. Good solutions?

    - by tharkun
    I've arrived at the point where I realise that I must start versioning my database schemata and changes. I consequently read the existing posts on SO about that topic, but I'm not sure how to proceed. I'm basically a one-man company, and not long ago I didn't even use version control for my code. I'm on a Windows environment, using Aptana (IDE) and SVN (with Tortoise). I work on PHP/MySQL projects. What's an efficient and sufficient (no overkill) way to version my database schemata? I do have a freelancer or two on some projects, but I don't expect a lot of branching and merging going on. So basically I would like to keep my schema versions in step with my code revisions. [edit] Momentary solution: for the moment I've decided I will just make a schema dump, plus one with the necessary initial data, whenever I'm going to commit a tag (stable version). That seems to be just enough for me at the current stage. [/edit] [edit2] Plus I'm now also using a third file called increments.sql where I put all the changes with dates, etc., to make it easy to trace the change history in one file. From time to time I integrate the changes into the two other files and empty increments.sql. [/edit2]


  • Send document to printer from web page

    - by FiveTools
    I have a webpage that activates a print job on a printer. This works in the localhost environment but does not work when the application is deployed to the webserver. I'm using the PrintDocument class from the .net System.Drawing.Print namespace. I'm now assuming the printer has to be available to the application on the remote server? Any suggestions on how I would get this to work? PrintDocument pd = new PrintDocument(); PaperSource ps = new PaperSource(); pd.DefaultPageSettings.PaperSize = new System.Drawing.Printing.PaperSize("Custom", 1180, 850); pd.PrintPage += new PrintPageEventHandler (this.pd_PrintPage); // Set your printer's name. Obtain from // System's Printer Dialog Box. pd.PrinterSettings.PrinterName = "Okidata ML 321 Turbo/D (IBM)"; //PrintPreviewDialog dlgPrintPvw = new PrintPreviewDialog(); //dlgPrintPvw.Document = pd; //dlgPrintPvw.Focus(); //dlgPrintPvw.ShowDialog(); pd.Print();


  • Run an ActiveX control through the web

    - by balexandre
    We have a web page that works fine on the local computer, as it uses a COM object that is only available on the local computer. The program generates HTML code: <html> <head> <script type="text/javascript"> <!-- function ResizeControl(){Y = document.body.clientHeight;if (Y < 1) {Y = 1}X = document.body.clientWidth;if (X < 1) {X = 1}ActiveX.width = X;ActiveX.height = Y} --> </script> <style type="text/css">html, body { overflow:hidden; } </style> </head> <body OnResize="ResizeControl()" OnLoad="ResizeControl()" leftmargin="0" topmargin="0" rightmargin="0" bottommargin="0"> <object id="ActiveX" classid="CLSID:8EC68701-329D-4567-BCB5-9EE4BA43D358" width="14" height="14"> <PARAM NAME="tabName" VALUE="Complaints"> </object> </body> </html> and it shows fine. My question is: how can we port this into a web environment? The Delphi developer has no idea and I'm not a Delphi fellow. I want to be able to use this "webpage" at a web address such as http://INTRANET/mysite/thispage.html. Any idea, any thought, any door to open is greatly appreciated :)


  • ColdFusion blank page in IE7 on refresh?

    - by richardtallent
    I'm new to ColdFusion and have a very basic problem that's really slowing me down. I'm making edits in a text editor and refreshing the page in web browsers for testing. Standard web dev stuff: no browser-sniffing, redirection, or other weirdness, and no proxies involved. When I refresh the page in Chrome or Firefox, everything works fine, but when I refresh in IE7, I get a blank page. View Source shows me: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML><HEAD> <META http-equiv=Content-Type content="text/html; charset=utf-8"></HEAD> <BODY></BODY></HTML> That's it. While I am rendering to the transitional DTD, the real head contains a title, etc. My development server is CF 9, production is 8; this problem has been happening in both. It seems it may only be happening on pages that are the result of a POST action. I've never experienced this in ASP.NET (my usual development environment) using the same browsers.


  • Why is there unreachable code here?

    - by Richard
    I am writing a C# app and want to output error messages to either the console or a message box (depending on the app type: enum AppTypeChoice { Console, Windows } ), and also control whether the app keeps running or not ( bool StopOnError ). I came up with this method that will check all the criteria, but I'm getting an "unreachable code detected" warning. I can't see why! Here is the whole method (brace yourselves for some hobbyist code!): public void OutputError(string message) { string standardMessage = "Something went WRONG!. [ But I'm not telling you what! ]"; string defaultMsgBoxTitle = "Aaaaarrrggggggggggg!!!!!"; string dosBoxOutput = "\n\n*** " + defaultMsgBoxTitle + " *** \n\n Message was: '" + message + "'\n\n"; AppTypeChoice appType = DataDefs.AppType; DebugLevelChoice level = DataDefs.DebugLevel; // Decide how much info we should give out here... if (level != DebugLevelChoice.None) { // Give some info.... if (appType == AppTypeChoice.Windows) MessageBox.Show(message, defaultMsgBoxTitle, MessageBoxButtons.OK, MessageBoxIcon.Error); else Console.WriteLine(dosBoxOutput); } else { // Be very secretive... if (appType == AppTypeChoice.Windows) MessageBox.Show(standardMessage, defaultMsgBoxTitle, MessageBoxButtons.OK, MessageBoxIcon.Error); else Console.WriteLine(standardMessage); } // Decide if app falls over or not.. if (DataDefs.StopOnError == true) Environment.Exit(0); // UNREACHABLE CODE HERE } Also, while I have your attention: to get the app type, I'm just using a constant at the top of the file (i.e. AppTypeChoice.Console in a Console app, etc.) - is there a better way of doing this (I mean finding out in code if it is a DOS or Windows app)? Also, I noticed that I can use a messagebox with a fully-qualified path in a Console app... How bad is it to do that (I mean, will I get tarred and feathered when other developers see it?!) Thanks for your help.
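    One common cause of CS0162 in a method like this (a guess, since DataDefs isn't shown): if DataDefs.StopOnError is declared as a const rather than a field or property, the compiler folds the condition at compile time and flags the branch that can never run - which would put the warning exactly on the Environment.Exit(0) call. A minimal illustration with hypothetical names:

      class DataDefs
      {
          public const bool StopOnError = false;    // a const, so the compiler knows the value at compile time
      }

      // elsewhere:
      if (DataDefs.StopOnError == true)
          Environment.Exit(0);                      // CS0162: unreachable code detected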


  • OdbcCommand on Stored Procedure - "Parameter not supplied" error on Output parameter

    - by Aaron
    I'm trying to execute a stored procedure (against SQL Server 2005 through the ODBC driver) and I receive the following error: Procedure or Function 'GetNodeID' expects parameter '@ID', which was not supplied. @ID is the OUTPUT parameter for my procedure; there is also an input parameter @machine, which I do supply and which defaults to null in the stored procedure: ALTER PROCEDURE [dbo].[GetNodeID] @machine nvarchar(32) = null, @ID int OUTPUT AS BEGIN SET NOCOUNT ON; IF EXISTS(SELECT * FROM Nodes WHERE NodeName=@machine) BEGIN SELECT @ID = (SELECT NodeID FROM Nodes WHERE NodeName=@machine) END ELSE BEGIN INSERT INTO Nodes (NodeName) VALUES (@machine) SELECT @ID = (SELECT NodeID FROM Nodes WHERE NodeName=@machine) END END The following is the code I'm using to set the parameters and call the procedure: OdbcCommand Cmd = new OdbcCommand("GetNodeID", _Connection); Cmd.CommandType = CommandType.StoredProcedure; Cmd.Parameters.Add("@machine", OdbcType.NVarChar); Cmd.Parameters["@machine"].Value = Environment.MachineName.ToLower(); Cmd.Parameters.Add("@ID", OdbcType.Int); Cmd.Parameters["@ID"].Direction = ParameterDirection.Output; Cmd.ExecuteNonQuery(); _NodeID = (int)Cmd.Parameters["@Count"].Value; I've also tried using Cmd.ExecuteScalar with no success. If I break before I execute the command, I can see that @machine has a value. If I execute the procedure directly from Management Studio, it works correctly. Any thoughts? Thanks.
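    One thing worth trying (a sketch, but the underlying behaviour is documented): the ODBC provider does not bind stored procedure parameters by name, so calling the procedure by its bare name with @-named parameters can leave them unmapped. Using the ODBC call escape syntax with positional ? markers usually resolves the "parameter not supplied" error:

      OdbcCommand cmd = new OdbcCommand("{ call GetNodeID(?, ?) }", _Connection);
      cmd.CommandType = CommandType.StoredProcedure;

      // Order must match the procedure signature; the names are only local labels.
      cmd.Parameters.Add("@machine", OdbcType.NVarChar, 32).Value = Environment.MachineName.ToLower();
      cmd.Parameters.Add("@ID", OdbcType.Int).Direction = ParameterDirection.Output;

      cmd.ExecuteNonQuery();
      int nodeId = (int)cmd.Parameters["@ID"].Value;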


  • How do I fix this installation problem with multicore Solr on Ubuntu 10.04?

    - by coleifer
    Following instructions from the two sites below, I've installed Tomcat 6 and Solr 1.4. http://gist.github.com/204638 https://wiki.fourkitchens.com/display/TECH/Solr+1.4+on+Ubuntu+9.10+and+CentOS+5 I have successfully got it up and running on a server running 9.04 with multicore support, but on the 10.04 box I can't seem to get it to work. I am able to reach localhost:xxxx/solr/ on the 10.04 box and see a single link to the Solr Admin, but following the link takes me to a 404 page with the following output: /solr/admin/ HTTP Status 404 - missing core name in path The requested resource (missing core name in path) is not available I am also unable to access /solr/site1/ as I would expect - it similarly returns a 404. <!-- from /var/solr/solr.xml, site dirs exist --> <cores adminPath="/admin/cores"> <core name="site1" instanceDir="site1" /> <core name="site2" instanceDir="site2" /> </cores> <!-- from /etc/tomcat6/Catalina/localhost/solr.xml --> <Context docBase="/var/solr/solr.war" debug="0" privileged="true" allowLinking="true" crossContext="true"> <Environment name="solr/home" type="java.lang.String" value="/var/solr" override="true" /> </Context>


  • Can I split system.serviceModel into a separate .config file?

    - by Mr Bell
    I want to separate the system.serviceModel section of my web.config into a separate file to facilitate some environment-specific settings. My efforts have been fruitless: when I attempt it using this method, the WCF code throws an exception, "The type initializer for 'System.ServiceModel.ClientBase`1' threw an exception." Can anyone tell me what I am doing wrong? Web.config: <configuration> <system.serviceModel configSource="MyWCF.config" /> .... MyWCF.config: <system.serviceModel> <extensions> ... </extensions> <bindings> ... </bindings> <behaviors> ... </behaviors> <client> ... </client> </system.serviceModel>
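    A hedged observation that may explain the failure: configSource can only be applied to individual configuration sections, not to a section group, and <system.serviceModel> is a section group. The usual workaround is to give each child section its own external file - a sketch, with made-up file names:

      <!-- Web.config -->
      <system.serviceModel>
        <bindings configSource="bindings.config" />
        <behaviors configSource="behaviors.config" />
        <client configSource="client.config" />
      </system.serviceModel>

      <!-- bindings.config -->
      <bindings>
        <!-- the bindings previously held in MyWCF.config -->
      </bindings>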


  • How do I lookup a JNDI Datasource from outside a web container?

    - by masotime
    I have the following environment set up: Java 1.5 Sun Application Server 8.2 Oracle 10 XE Struts 2 Hibernate I'm interested to know how I can write code for a Java client (i.e. outside of a web application) that can reference the JNDI datasource provided by the application server. The ports for the Sun Application Server are all at their defaults. There is a JNDI datasource named jdbc/xxxx in the server configuration, but I noticed that the Hibernate configuration for the web application uses the name java:comp/env/jdbc/xxxx instead. Most of the examples I've seen so far involve code like Context ctx = new InitialContext(); ctx.lookup("jdbc/xxxx"); But it seems I'm either using the wrong JNDI name, or I need to configure a jndi.properties or other configuration file to correctly point to a listener? I have appserv-rt.jar from the Sun Application Server which has a jndi.properties inside of it, but it does not seem to help. There's a similar question here, but it doesn't give any code / refers to having iBatis obtain the JNDI Datasource automatically: http://stackoverflow.com/questions/39053/accessing-datasource-from-outside-a-web-container-through-jndi


  • Cannot upload media via Wordpress uploader

    - by Justin Johnson
    This has to do with media uploading in Wordpress. Every time WP creates a folder for new uploads (it organizes uploads by year and month: yyyy/mm), it creates it with the "apache:apache' user and group, with full access to all (777 or drwxrwxrwx). However, after that, WP cannot create a folder within that folder (e.g.: mkdir 2011 succeeds, but mkdir 2011/01 fails). Also, uploads cannot be moved into these newly created folders even though the permissions are 777 (rwxrwxrwx). Once a month, I have to chown the newly created folders to be the same as user:group as the rest of the files. Once I do that, uploading works fine (which doesn't make sense to me The really frustrating part is that this problem doesn't exist in other WP installs on other domains on the same server. * I wasn't sure if this should be here or on serverfault. Edit: The containing directory /.../httpdocs/blog/wp-content/uploads has the correct ownership drwxrwxrwx 5 myuser psaserv 4096 Jun 3 18:38 uploads This is a Plesk/CentOS environment hosted by Media Temple (dv). I've written the following test script to simulate the problem <pre><?php $d = "d" . mt_rand(100, 500); var_dump( get_current_user(), $d, mkdir($d), chmod($d, 0777), mkdir("$d/$d"), chmod("$d/$d", 0777), fileowner($d), getmyuid() ); The script always creates the first directory mkdir($d) successfully. On domain A, where the WP problem is, it cannot create the nested directory mkdir("$d/$d"). However, on domain B, both directories are successfully created. I am running each script at /var/www/vhosts/domainA/httpdocs/tmp/t.php and /var/www/vhosts/domainB/httpdocs/tmp/t.php respectively I checked the permissions on tmp, httpdocs, and domain[AB] and they are the same for each path. The only thing that differs is the user.


  • Delphi dbExpress and Interbase: UTF8 migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions listed below. Many thanks in advance for your answers! are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back? are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets? would you recommend to upgrade the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority) can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too? which strategy would you recommend to work on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house so development and database administration is done internally.


  • Continuous Integration with 64-bit Sharepoint and TFS 2008?

    - by Hirvox
    I've set up a 64-bit TFS 2008 build server with Sharepoint, continuous integration and out-of-the-box MSTest. Unit tests for plain business logic classes run just fine and test results are published into TFS. However, any test that uses Sharepoint's API fails horribly, SPFarm.Local returning null and so on. Is there a way to fix this? The tests run fine in an otherwise identical 32-bit development environment (Windows Server 2008 under Hyper-V, Sharepoint patched up to June 2009 cumulative update) from both Visual Studio and command line, so the problem is not about improper use of SPContext.Current or any other part of the API that needs to be run in a web server context. I've ruled out permissions issues, because the build agent account can deploy the solution and create site collections just fine with stsadm. The next culprit could be that the unit tests were being run with a 32-bit process, which couldn't access the 64-bit Sharepoint API properly. I tried a workaround, but it has the side effect of disabling TFS support in MSTest. Do I have to wait for 2010 versions of MS tools (and hope for the best) or is there a third-party test framework available that runs natively in 64 bit and can publish test results into TFS 2008?


  • Quickest algorithm for finding sets with high intersection

    - by conradlee
    I have a large number of user IDs (integers), potentially millions. These users all belong to various groups (sets of integers), such that there are on the order of 10 million groups. To simplify my example and get to the essence of it, let's assume that all groups contain 20 user IDs (i.e., all integer sets have a cardinality of 20). I want to find all pairs of integer sets that have an intersection of 15 or greater. Should I compare every pair of sets? (If I keep a data structure that maps userIDs to set membership, this would not be necessary.) What is the quickest way to do this? That is, what should my underlying data structure be for representing the integer sets? Sorted sets, unsorted - can hashing somehow help? And what algorithm should I use to compute the set intersections? I prefer answers that relate to C/C++ (especially the STL), but any more general algorithmic insights are welcome too. Update: also note that I will be running this in parallel in a shared-memory environment, so ideas that cleanly extend to a parallel solution are preferred.
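    One standard way to avoid comparing every pair (a sketch of the idea; the question prefers C/C++, but the approach is language-agnostic): build an inverted index from user ID to the groups that contain it, then count co-occurrences, since only pairs of groups that share at least one user can intersect at all. With 20-element groups each user contributes at most a few pairs, and any pair whose count reaches 15 is an answer. Memory for the pair counts is the main risk if some users belong to very many groups.

      // 'groups' is assumed to be an IList<int[]> of user IDs per group.
      var byUser = new Dictionary<int, List<int>>();            // user ID -> indices of groups containing that user
      for (int g = 0; g < groups.Count; g++)
          foreach (int user in groups[g])
          {
              List<int> list;
              if (!byUser.TryGetValue(user, out list))
                  byUser[user] = list = new List<int>();
              list.Add(g);
          }

      var shared = new Dictionary<long, int>();                 // encoded (groupA, groupB) pair -> number of shared users
      foreach (List<int> list in byUser.Values)
          for (int i = 0; i < list.Count; i++)
              for (int j = i + 1; j < list.Count; j++)
              {
                  long key = ((long)list[i] << 32) | (uint)list[j];   // list is in ascending order, so the pair is ordered
                  int count;
                  shared.TryGetValue(key, out count);
                  shared[key] = count + 1;                       // pairs reaching 15 have an intersection of 15 or more
              }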


  • Visual Studio: How to attach a debugger dynamically to a specific process

    - by Jeff Cyr
    I am building an internal dev tool to manage different processes commonly used in our development environment. The tool shows the list of monitored processes, indicates their running state, and allows starting or stopping each process. I'd like to add the ability to attach a debugger to a monitored process from my tool, instead of going to 'Debug - Attach to Process' in Visual Studio and finding the process. My goal is to have something like Debugger.Launch() that would show a list of the available Visual Studio instances. I can't use Debugger.Launch() because it launches the debugger on the process that makes the call; I would need something like Debugger.Launch(processId). Does anyone know how to achieve this functionality? A solution could be to implement a command in each monitored process that calls Debugger.Launch() when the command is received from the monitoring tool, but I would prefer something that does not require modifying the code of the monitored processes. Side question: when using Debugger.Launch(), instances of Visual Studio that already have a debugger attached are not listed. Visual Studio is not limited to one attached debugger; you can attach to multiple processes when using 'Debug - Attach to Process'. Anyone know how to bypass this limitation when using Debugger.Launch() or an alternative?
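    One approach that leaves the monitored processes untouched (a sketch: it assumes Visual Studio 2008, a reference to EnvDTE, and that Marshal.GetActiveObject is acceptable even though it returns only one registered instance rather than letting you pick among several): drive a running Visual Studio through its DTE automation object and ask its debugger to attach to the target process ID.

      using System.Runtime.InteropServices;
      using EnvDTE;

      // "VisualStudio.DTE.9.0" is the VS 2008 ProgID; adjust for other versions.
      var dte = (DTE)Marshal.GetActiveObject("VisualStudio.DTE.9.0");

      foreach (Process process in dte.Debugger.LocalProcesses)   // EnvDTE.Process, not System.Diagnostics.Process
      {
          if (process.ProcessID == targetPid)                    // targetPid: the monitored process's ID
          {
              process.Attach();
              break;
          }
      }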


  • Ruby on Rails: undefined method 'version_requirements' when attempting to start server after new install

    - by ezabak
    Hi there, I had to newly install ruby on rails recently. When I attempted to start the server for a project I had already been working on previous to this new install, I received the following error: $ ruby script/server => Booting WEBrick... ./script/../config/../vendor/rails/railties/lib/rails/gem_dependency.rb:107:in `requirement': undefined method `version_requirements' for #<Gem::Dependency:0xb74bf764> (NoMethodError) from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `map' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:165:in `process' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `send' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `run' from /media/78C0-455B/bidmc/schedule/config/environment.rb:13 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/servers/webrick.rb:59 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/server.rb:49 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require' from script/server:3 I have the latest versions of ruby, rubygems, and rails. Any suggestions? Thanks.


  • Rubygems on Debian: Gems won't load (LoadError)

    - by daswerth
    I've installed the development version of Crunchbang, a linux distro based off Debian. I got Ruby and Rubygems installed, but I can't get the gems I've installed to load. Here is a command-line session: $ ruby -v ruby 1.9.1p378 (2010-01-10 revision 26273) [i486-linux] $ gem env RubyGems Environment: - RUBYGEMS VERSION: 1.3.6 - RUBY VERSION: 1.9.1 (2010-01-10 patchlevel 378) [i486-linux] - INSTALLATION DIRECTORY: /usr/lib/ruby1.9.1/gems/1.9.1 - RUBY EXECUTABLE: /usr/bin/ruby1.9.1 - EXECUTABLE DIRECTORY: /usr/bin - RUBYGEMS PLATFORMS: - ruby - x86-linux - GEM PATHS: - /usr/lib/ruby1.9.1/gems/1.9.1 - /home/corey/.gem/ruby/1.9.1 - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => false - :bulk_threshold => 1000 - REMOTE SOURCES: - http://rubygems.org/ $ echo $PATH /home/corey/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/home/corey/.gem/ruby/1.9.1:/usr/lib/ruby1.9.1/gems/1.9.1 $ gem list -d nokogiri `*** LOCAL GEMS ***` nokogiri (1.4.1) Authors: Aaron Patterson, Mike Dalessio Rubyforge: http://rubyforge.org/projects/nokogiri Homepage: http://nokogiri.org Installed at: /usr/lib/ruby1.9.1/gems/1.9.1 Nokogiri (?) is an HTML, XML, SAX, and Reader parser $ ruby -r rubygems -e "require 'nokogiri'" -e:1:in `require': no such file to load -- nokogiri (LoadError) from -e:1:in `' I've encountered similar problems on Ubuntu before, but they were easy to fix. I can't figure out what's wrong in this particular case, and Google didn't seem to know either. Any help would be greatly appreciated! By the way... this is my first submission to stackoverflow. I hope this question is relevant. :)

