Search Results

Search found 10691 results on 428 pages for 'batch insert'.

  • What's the best way to fill or paint around an image in Java?

    - by wsorenson
    I have a set of images that I'm combining into a single image mosaic using JAI's MosaicDescriptor. Most of the images are the same size, but some are smaller. I'd like to fill in the missing space with white - by default, the MosaicDescriptor uses black. I tried setting the double[] background parameter to { 255 }, and that fills in the missing space with white, but it also introduces some discoloration in some of the other full-sized images. I'm open to any method - there are probably many ways to do this, but the documentation is difficult to navigate. I am considering converting any smaller images to a BufferedImage and calling setRGB() on the empty areas (though I am unsure what to use for the scansize on the batch setRGB() method). My question is essentially: what is the best way to take an image (in JAI, or BufferedImage) and fill / add padding to a certain size? Is there a way to accomplish this in the MosaicDescriptor call without side effects? For reference, here is the code that creates the mosaic:

        for (int i = 0; i < images.length; i++) {
            images[i] = JPEGDescriptor.create(new ByteArraySeekableStream(images[i]), null);
            if (i != 0) {
                images[i] = TranslateDescriptor.create(image, (float) (width * i), null, null, null);
            }
        }
        RenderedOp finalImage = MosaicDescriptor.create(ops, MosaicDescriptor.MOSAIC_TYPE_OVERLAY,
                null, null, null, null, null);
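    A BufferedImage-based workaround may be the simplest route if the JAI parameters keep causing side effects: pre-pad each undersized image onto a white canvas before handing it to the mosaic, so every tile is already full-sized. A minimal sketch, assuming plain BufferedImage input (padToSize and the target dimensions are illustrative, not part of the JAI API):

        import java.awt.Color;
        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        public class Pad {
            // Draws the source onto a white canvas of the target size.
            // Assumes src fits within targetWidth x targetHeight.
            public static BufferedImage padToSize(BufferedImage src, int targetWidth, int targetHeight) {
                BufferedImage padded = new BufferedImage(targetWidth, targetHeight, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = padded.createGraphics();
                g.setColor(Color.WHITE);
                g.fillRect(0, 0, targetWidth, targetHeight); // white where the mosaic would have left black
                g.drawImage(src, 0, 0, null);                // original pixels in the top-left corner
                g.dispose();
                return padded;
            }
        }

    This sidesteps the background-parameter discoloration entirely, since the mosaic never sees an undersized tile.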

  • Poll multiple desktops/servers on a network remotely to determine the IP Type: Static or DHCP

    - by Charles Laird
    Had a gentleman answer 90% of my original question, which is to say I now have the ability to poll the device that I am running the below script on. The end goal is to obtain the IP type - Static or DHCP - for all desktops/servers on a network I support. I have the list of servers that I will input in a batch file; I'm just looking for the code to actually poll the other devices on the network from one location. Output to be viewed:

        Device name                                                       IP Address     MAC Address        Type
        Marvell Yukon 88E8001/8003/8010 PCI Gigabit Ethernet Controller   NULL           00:00:F3:44:C6:00  DHCP
        Generic Marvell Yukon 88E8056 based Ethernet Controller           192.168.1.102  00:00:F3:44:D0:00  DHCP

        ManagementClass objMC = new ManagementClass("Win32_NetworkAdapterConfiguration");
        ManagementObjectCollection objMOC = objMC.GetInstances();
        txtLaunch.Text = ("Name\tIP Address\tMAC Address\tType" + "\r\n");
        foreach (ManagementObject objMO in objMOC)
        {
            StringBuilder builder = new StringBuilder();
            object o = objMO.GetPropertyValue("IPAddress");
            object m = objMO.GetPropertyValue("MACAddress");
            if (o != null || m != null)
            {
                builder.Append(objMO["Description"].ToString());
                builder.Append("\t");
                if (o != null)
                    builder.Append(((string[])(objMO["IPAddress"]))[0].ToString());
                else
                    builder.Append("NULL");
                builder.Append("\t");
                builder.Append(m.ToString());
                builder.Append("\t");
                builder.Append(Convert.ToBoolean(objMO["DHCPEnabled"]) ? "DHCP" : "Static");
                builder.Append("\r\n");
            }
            txtLaunch.Text = txtLaunch.Text + (builder.ToString());
        }

    I'm open to recommendations here.

  • Seeking help with a MT design pattern

    - by SamG
    I have a queue of 1000 work items and an n-proc machine (assume n = 4). The main thread spawns n (= 4) worker threads at a time (25 outer iterations) and waits for all threads to complete before processing the next n (= 4) items, until the entire queue is processed:

        for (i = 0 to queue.Length / numprocs) {
            for (j = 0 to numprocs) {
                CreateThread(WorkerThread, WorkItem)
            }
            WaitForMultipleObjects(threadHandle[])
        }

    The work done by each (worker) thread is not homogeneous. Therefore, in one batch (of n), if thread 1 spends 1000 s doing work and the other 3 threads only 1 s, the above design is inefficient, because after 1 second the other 3 processors are idling. Besides, there is no pooling - 1000 distinct threads are being created. How do I use the NT thread pool (I am not familiar enough with it - hence the long-winded question) and QueueUserWorkItem to achieve the above? The following constraints should hold:

    - The main thread requires that all work items are processed before it can proceed, so I would think that a WaitAll-like construct as above is required.
    - I want to create only as many threads as processors (i.e. not 1000 threads at a time).
    - I don't want to create 1000 distinct events, pass them to the worker threads, and wait on all the events, whether using the QueueUserWorkItem API or otherwise.

    Existing code is in C++; I prefer C++ because I don't know C#. I suspect that the above is a very common pattern and was looking for input from you folks.
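    For what it's worth, the shape of the usual fix is a fixed-size pool that drains the whole queue, with a single wait at the end instead of a wait per batch of n. The sketch below shows the pattern in Java's ExecutorService terms purely for illustration (the Win32 analogue would be QueueUserWorkItem plus one completion event or a counting semaphore); process() and the item count are placeholders:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class WorkQueueDemo {
            public static void main(String[] args) throws InterruptedException {
                int numProcs = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(numProcs); // n threads total, not 1000
                List<Callable<Void>> tasks = new ArrayList<Callable<Void>>();
                for (int i = 0; i < 1000; i++) {
                    final int item = i;
                    tasks.add(new Callable<Void>() {
                        public Void call() {
                            process(item); // uneven work no longer stalls a whole batch of n
                            return null;
                        }
                    });
                }
                pool.invokeAll(tasks); // blocks until every item is done - the single WaitAll step
                pool.shutdown();
            }

            static void process(int item) { /* work item body */ }
        }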

  • Can this method to convert a name to proper case be improved?

    - by Kelsey
    I am writing a basic function to convert millions of names (a one-time batch process) from their current form, which is all upper case, to a proper mixed case. I came up with the following so far:

        public string ConvertToProperNameCase(string input)
        {
            TextInfo textInfo = new CultureInfo("en-US", false).TextInfo;
            char[] chars = textInfo.ToTitleCase(input.ToLower()).ToCharArray();
            for (int i = 0; i + 1 < chars.Length; i++)
            {
                if ((chars[i].Equals('\'')) || (chars[i].Equals('-')))
                {
                    chars[i + 1] = Char.ToUpper(chars[i + 1]);
                }
            }
            return new string(chars);
        }

    It works in most cases, such as:

        JOHN SMITH      - John Smith
        SMITH, JOHN T   - Smith, John T
        JOHN O'BRIAN    - John O'Brian
        JOHN DOE-SMITH  - John Doe-Smith

    There are some edge cases that do not work, like:

        JASON MCDONALD    - Jason Mcdonald    (Correct: Jason McDonald)
        OSCAR DE LA HOYA  - Oscar De La Hoya  (Correct: Oscar de la Hoya)
        MARIE DIFRANCO    - Marie Difranco    (Correct: Marie DiFranco)

    These are not captured and I am not sure I can handle all these odd edge cases. Can anyone think of anything I could change or add to capture more edge cases? I am sure there are tons of edge cases I am not even thinking of. All casing should follow North American conventions, meaning that if a country expects a specific capitalization format that differs from the North American format, the North American format takes precedence.
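    For reference, the usual conclusion on this problem is that no pure algorithm can decide "McDonald" vs. "Macdonald" or "de la Hoya", so title-casing is typically paired with a hand-maintained exception table. A rough sketch of that shape - written in Java purely for illustration, with a made-up table - looks like:

        import java.util.HashMap;
        import java.util.Locale;
        import java.util.Map;

        public class NameCaser {
            // Exception table: entries that plain capitalization gets wrong.
            private static final Map<String, String> EXCEPTIONS = new HashMap<String, String>();
            static {
                EXCEPTIONS.put("mcdonald", "McDonald");
                EXCEPTIONS.put("difranco", "DiFranco");
                EXCEPTIONS.put("de", "de");
                EXCEPTIONS.put("la", "la");
            }

            public static String properCase(String input) {
                StringBuilder out = new StringBuilder();
                for (String word : input.trim().split("\\s+")) {
                    if (out.length() > 0) out.append(' ');
                    String key = word.toLowerCase(Locale.US);
                    String fixed = EXCEPTIONS.get(key);
                    out.append(fixed != null ? fixed : capitalize(key));
                }
                return out.toString();
            }

            // Upper-cases the first letter and any letter after ' or - (same rule as the C# version).
            private static String capitalize(String w) {
                StringBuilder sb = new StringBuilder(w);
                boolean upper = true;
                for (int i = 0; i < sb.length(); i++) {
                    char c = sb.charAt(i);
                    if (upper && Character.isLetter(c)) {
                        sb.setCharAt(i, Character.toUpperCase(c));
                        upper = false;
                    }
                    if (c == '\'' || c == '-') upper = true;
                }
                return sb.toString();
            }
        }

    The table has to be curated (names like "MACY" make a blanket Mc/Mac prefix rule unsafe), which is why a lookup, rather than more logic, is the common answer.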

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

  • Usage of putty in command line from Hudson

    - by kij
    Hi, I'm trying to use PuTTY on the command line from a Hudson job. The command is the following one:

        putty -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt

    where 'command.txt' is a shell script to execute on the server through SSH. If I launch this command from the Windows command prompt, it works: the shell script is executed on the server machine. If I launch a build of the Hudson job configured with this batch command, it doesn't work. The build is running... and running... and running... without doing anything, and I have to stop it manually. So my question is: is it possible to launch an external program (i.e. PuTTY) from a Hudson job?

    PS: I tried the SSH plugin, but it is not really a good plugin (pre/post build only, fail status of the launched commands not caught by Hudson, etc.)

    Thanks in advance for your help. Best regards. kij

    EDIT: These are the build logs:

        [workspace] $ cmd /c call C:\WINDOWS\TEMP\hudson7429256014041663539.bat
        C:\Hudson\jobs\Artifact deployer\workspace>putty -ssh -2 -P 22 USER@SERV_ADD -pw PASS -m com.txt
        Le build a été annulé
        Finished: ABORTED

    And the hudson.err.log file at the same time (after a stop):

        3 juin 2010 18:27:28 hudson.model.Run run
        INFO: Artifact deployer #6 aborted
        java.lang.InterruptedException
            at java.lang.ProcessImpl.waitFor(Native Method)
            at hudson.Proc$LocalProc.join(Proc.java:179)
            at hudson.Launcher$ProcStarter.join(Launcher.java:278)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:83)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
            at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
            at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601)
            at hudson.model.Build$RunnerImpl.build(Build.java:174)
            at hudson.model.Build$RunnerImpl.doRun(Build.java:138)
            at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:416)
            at hudson.model.Run.run(Run.java:1241)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:124)

    My shell script only writes "hello" into a "hello.txt" file on the server, and nothing is done.

  • Thin and Bundler on Windows Rails

    - by Bob
    Trying to get Thin working with Bundler on Windows. I know, major PITA, but anyway: I'm new to the Thin and Bundler gems. I'm on Ruby 1.8.6 and Rails 2.3.5 and trying to get someone else's app running on my laptop; the app uses Thin and Bundler to install the required gems. I noticed that Bundler created a .bundle folder under the My Documents folder and put all the gems there for the app. When I tried "thin run", it reported:

        'thin' is not recognized as an internal or external command, operable program or batch file.

    I checked the environment path and it doesn't point to the .bundle folder at all, and I found there is a thin.bat in C:\Documents and Settings\Bob\.bundle\ruby\1.8\bin. When I tried "C:\Documents and Settings\Bob\.bundle\ruby\1.8\bin\thin" start, it gave me another error:

        c:/ruby/lib/ruby/site_ruby/1.8/rubygems.rb:777:in `report_activate_error': Could not find RubyGem thin (>= 0) (Gem::LoadError)
            from c:/ruby/lib/ruby/site_ruby/1.8/rubygems.rb:211:in `activate'
            from c:/ruby/lib/ruby/site_ruby/1.8/rubygems.rb:1056:in `gem'
            from C:/Documents and Settings/Bob/.bundle/ruby/1.8/bin/thin:18

    I get the same error if I add "C:\Documents and Settings\Bob\.bundle\ruby\1.8\bin" to the env path. Anyone know how I can get this working?

  • Sybase: how can I remove non-printable characters from CHAR or VARCHAR fields with SQL?

    - by Kenny Drobnack
    I'm working with a Sybase database that seems to have non-printable characters in some of the string fields, and this is throwing off some of our processing code. At first glance it seemed to only be newlines and carriage returns, but we also have an ASCII code 27 in there - an ESC character - plus some accented characters and some other oddities. I have no direct access to change the database, so changing the bad data isn't an option, yet. For now I have to make do with just filtering it out. We're trying to export the table data from one database and load it into a database used by another application in a nightly batch process. Ideally, I'd like to have a function that I can pass a list of characters and just have Sybase return the data with those characters removed. I'd like to keep it something we could do in plain SQL if possible. Something like this, to remove characters that are ASCII 0 - 31:

        select str_replace(FIELD1, (0-31), NULL) as FIELD1,
               str_replace(FIELD2, (0-31), NULL) as FIELD2
        from TABLE

    So far, str_replace is the nearest I can find, but it only allows replacing one string with another - no support for character ranges, so it won't let me do the above. We're running Sybase ASE 12.5 on Unix servers.

  • How to solve non-linear equations using python

    - by stars83clouds
    I have the following code:

        #!/usr/bin/env python
        from scipy.optimize import fsolve
        import math

        h = 6.634e-27
        k = 1.38e-16

        freq1 = 88633.9360e6
        freq2 = 88631.8473e6
        freq3 = 88630.4157e6

        def J(freq, T):
            return (h*freq/k)/(math.exp(h*freq/(k*T))-1)

        def equations(x, y, z, w, a, b, c, d):
            f1 = a*(J(freq1,y)-J(freq1,2.73))*(1-math.exp(-a*z))-(J(freq2,x)-J(freq2,2.73))*(1-math.exp(-z))
            f2 = b*(J(freq3,w)-J(freq3,2.73))*(1-math.exp(-b*z))-(J(freq2,x)-J(freq2,2.73))*(1-math.exp(-z))
            f3 = c*(J(freq3,w)-J(freq3,2.73))*(1-math.exp(-b*z))-(J(freq1,y)-J(freq1,2.73))*(1-math.exp(-a*z))
            f4 = d*(J((freq3+freq1)/2,(y+w)/2)-J((freq3+freq1)/2,2.73))-(J(freq2,x)-J(freq2,2.73))*(1-math.exp(-z))
            return (f1, f2, f3, f4)

    So, I have defined the equations in the above code. However, I now wish to solve this set of equations using fsolve or another non-linear numerical routine. I tried the following syntax, but to no avail:

        x, y, z, w = fsolve(equations, (1, 1, 1, 1))

    I keep getting the error that "x" is not defined. I am executing all commands at the command line, since I have no idea how to run a batch of commands like the above automatically in Python. I welcome any advice on how to solve this.

  • Ibatis startBatch() only works with SqlMapClient's own start and commit transactions, not with Spring

    - by Brian
    Hi, I'm finding that even though I have code wrapped in Spring transactions, and it commits/rolls back when I would expect, in order to make use of JDBC batching when using Ibatis and Spring I need to use explicit SqlMapClient transaction methods. I.e. this does batching as I'd expect:

        dao.getSqlMapClient().startTransaction();
        dao.getSqlMapClient().startBatch();
        int i = 0;
        for (MyObject obj : allObjects) {
            dao.storeChange(obj);
            i++;
            if (i % DB_BATCH_SIZE == 0) {
                dao.getSqlMapClient().executeBatch();
                dao.getSqlMapClient().startBatch();
            }
        }
        dao.getSqlMapClient().executeBatch();
        dao.getSqlMapClient().commitTransaction();

    But if I don't have the opening and closing transaction statements, and rely on Spring to manage things (which is what I want to do!), batching just doesn't happen. Given that Spring does otherwise seem to be handling its side of the bargain regarding transaction management, can anyone advise on any known issues here? (The database is MySQL; I'm aware of the issues regarding its JDBC pseudo-batch approach with INSERT statement rewriting - that's definitely not an issue here.)
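    For comparison, the pattern usually suggested for keeping a batch inside a Spring-managed transaction is to push all the statements through a single SqlMapClientCallback, so they share one session. A sketch, assuming Spring 2.x's iBATIS support (MyObject and the "storeChange" statement name are carried over from the snippet above):

        import java.sql.SQLException;
        import java.util.List;
        import com.ibatis.sqlmap.client.SqlMapExecutor;
        import org.springframework.orm.ibatis.SqlMapClientCallback;
        import org.springframework.orm.ibatis.support.SqlMapClientDaoSupport;

        public class ChangeDao extends SqlMapClientDaoSupport {
            // Runs the whole batch in one iBATIS session; the surrounding
            // Spring transaction still decides when it commits.
            public void storeChanges(final List<MyObject> allObjects) {
                getSqlMapClientTemplate().execute(new SqlMapClientCallback() {
                    public Object doInSqlMapClient(SqlMapExecutor executor) throws SQLException {
                        executor.startBatch();
                        for (MyObject obj : allObjects) {
                            executor.update("storeChange", obj);
                        }
                        return Integer.valueOf(executor.executeBatch());
                    }
                });
            }
        }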

  • What are some funny loading statements to keep users amused?

    - by Oli
    Nobody likes waiting, but unfortunately in the Ajax application I'm working on at the moment, there is one fair-sized pause (1-2 seconds a go) that users have to undergo each and every time they want to load up a chunk of data. I've tried to make the load as interactive as possible. There's an animated GIF alongside a very plain, very dull "Loading..." message. So I thought it might be quite fun to come up with a batch of 50-or-so funny-looking messages and pick from them randomly so the user never knows what they're going to see. The time they would have spent growing impatient is fruitfully used. Here's what I've come up with so far, just to give you an idea:

        var randomLoadingMessage = function() {
            var lines = new Array(
                "Locating the required gigapixels to render...",
                "Spinning up the hamster...",
                "Shovelling coal into the server...",
                "Programming the flux capacitor"
            );
            return lines[Math.round(Math.random()*(lines.length-1))];
        }

    (Yes - I know some of those are pretty lame - that's why I'm here :) The funniest I see today will get the prestigious "Accepted Answer" award. Others get votes for participation. Enjoy!!

  • How to create reactive tasks for programming competitions?

    - by directx
    A reactive task is sometimes seen in the IOI programming competition. Unlike batch tasks, reactive solutions take input from another program as well as sending output to it. The program typically 'queries' the judge program a certain number of times, then outputs a final answer. An example: the client program accepts lines one by one, and simply echoes them back. When it encounters a line with "done", it exits immediately. The client program in Java looks like this:

        import java.util.*;

        class Main {
            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                String s;
                while (!(s = in.nextLine()).equals("done"))
                    System.out.println(s);
            }
        }

    The judge program supplies the input and processes output from the client program. In this example, it feeds the client a predefined input and checks if the client has echoed it back correctly. A session might go like this:

        Judge      Client
        ------------------
        Hello
                   Hello
        World
                   World
        done

    I'm having trouble writing the judge program and having it judge the client program. I'd appreciate it if someone could write a judge program for my example.
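    A minimal judge along the requested lines - it spawns the client, writes each scripted line, and verifies the echo. The assumption that the client is launched with "java Main" is mine, and a real grader would also add timeouts:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;

        public class Judge {
            public static void main(String[] args) throws Exception {
                String[] script = { "Hello", "World" };
                Process client = new ProcessBuilder("java", "Main").start();
                PrintWriter toClient = new PrintWriter(client.getOutputStream(), true); // autoflush per line
                BufferedReader fromClient = new BufferedReader(new InputStreamReader(client.getInputStream()));

                boolean ok = true;
                for (String line : script) {
                    toClient.println(line);                   // judge -> client
                    ok &= line.equals(fromClient.readLine()); // client must echo it back
                }
                toClient.println("done");                     // tells the client to exit
                client.waitFor();
                System.out.println(ok ? "ACCEPTED" : "WRONG ANSWER");
            }
        }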

  • Use $_FILES on a page called by .ajax

    - by RachelD
    I have two .php pages that I'm working with. index.php has a file upload form that posts back to index.php. I can access $_FILES no problem on index.php after submitting the form. My issue is that I want (after the form submit and the page loads) to use .ajax (jQuery) to call another .php file so that file can open the upload and process some of the rows and return the results to ajax. The ajax then displays the results and recursively calls itself to process the next batch of rows. Basically I want to process (put in the DB, etc.) the CSV in chunks and display it for the user in between chunks. I'm doing it this way because the files are 400,000+ rows and the user doesn't want to wait the 10+ minutes for them all to be processed. I don't want to move this file (save it) because I just need to process it and throw it away, and if a user closes the page while it's processing, the file won't be thrown away. I could cron-script it, but I don't want to. What I would really like to do is pass the (single) $_FILES through .ajax, OR save it in $_POST or $_SESSION to use on the second page. Is there any hope for my cause? Here's the ajax code if that helps:

        function processCSV(startIndex, length) {
            $.ajax({
                url: "ajax-targets/process-csv.php",
                dataType: "json",
                type: "POST",
                data: { startIndex: startIndex, length: length },
                timeout: 60000, // 1000 = 1 sec
                success: function(data) {
                    // jQuery to display the rows from the CSV
                    var newStart = startIndex + length;
                    if (newStart <= data['csvNumRows']) {
                        processCSV(newStart, length);
                    }
                }
            });
        }
        processCSV(1, 2);

    P.S. I did try this: "Passing $_FILES or $_POST to a new page with PHP", but it's not working for me :( SOS.

  • Installing and using acts-as-taggable-on

    - by seaneshbaugh
    This is going to be a really dumb question, I just know it, but I'm going to ask anyway because it's driving me crazy. How do I get acts-as-taggable-on to work? I installed it as a gem with gem install acts-as-taggable-on, because I can't ever seem to get installing plugins to work, but that's a whole other batch of questions that are all probably really dumb. Anyway, no problems there, it installed correctly. I did ruby script/generate acts_as_taggable_on_migration and rake db:migrate, again no problems. I added acts_as_taggable to the model I want to use tags with, started up the server and then loaded the index for the model just to see if what I've got so far is working, and got the following error: undefined local variable or method `acts_as_taggable' for #. I figured that just means I need to add something like require 'acts-as-taggable-on' to my model's file, because that's typically what's necessary for gems. So I did that, hit refresh, and got uninitialized constant ActiveRecord::VERSION. I'm not even going to pretend to begin to know what that means. Did I go wrong somewhere, or is there something else I need to do? The installation instructions seem to just assume you generally know what you're doing, and don't even begin to explain what to do when things go wrong.

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data, and which has already proven to be much better than what we had. When it comes to updating existing data, my initial thought was to query only for data that was updated. There is a 'Modified' field that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest creating 'batch' processes that constantly run through all listings regardless of updated status. Is this the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?

  • Neo4j Reading data / performing shortest path calculations on stored data

    - by paddydub
    I'm using the Batch_Insert example to insert data into the database:

        public static void CreateData() {
            // create the batch inserter
            BatchInserter inserter = new BatchInserterImpl("var/graphdb",
                    BatchInserterImpl.loadProperties("var/neo4j.props"));

            Map<String,Object> properties = new HashMap<String,Object>();
            properties.put("name", "Mr. Andersson");
            properties.put("age", 29);
            long node1 = inserter.createNode(properties);

            properties.put("name", "Trinity");
            properties.remove("age");
            long node2 = inserter.createNode(properties);

            inserter.createRelationship(node1, node2,
                    DynamicRelationshipType.withName("KNOWS"), null);
            inserter.shutdown();
        }

    How can I read this data back from the database? I can't find any examples of how to do this. I would like to store graph data in the database:

        graph.makeEdge("s", "c", "cost", (double) 7);
        graph.makeEdge("c", "e", "cost", (double) 7);
        graph.makeEdge("s", "a", "cost", (double) 2);
        graph.makeEdge("a", "b", "cost", (double) 7);
        graph.makeEdge("b", "e", "cost", (double) 2);
        Dijkstra<Double> dijkstra = getDijkstra(graph, 0.0, "s", "e");

    What is the best method to store this kind of data, with tens of thousands of edges? And how do I then run the Dijkstra algorithm over the stored graph data to find shortest paths?
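    For the read side, a sketch using the regular embedded API of the same era (the store directory matches the inserter; treating node 1 as a known ID is an assumption - in practice you would record the IDs returned by createNode(), or index the nodes):

        import org.neo4j.graphdb.Direction;
        import org.neo4j.graphdb.GraphDatabaseService;
        import org.neo4j.graphdb.Node;
        import org.neo4j.graphdb.Relationship;
        import org.neo4j.graphdb.Transaction;
        import org.neo4j.kernel.EmbeddedGraphDatabase;

        public class ReadData {
            public static void main(String[] args) {
                // Reopen the store the batch inserter wrote (the inserter must be shut down first).
                GraphDatabaseService db = new EmbeddedGraphDatabase("var/graphdb");
                Transaction tx = db.beginTx();
                try {
                    Node person = db.getNodeById(1); // ID recorded from inserter.createNode()
                    System.out.println(person.getProperty("name"));
                    for (Relationship rel : person.getRelationships(Direction.OUTGOING)) {
                        System.out.println("KNOWS -> " + rel.getEndNode().getProperty("name"));
                    }
                    tx.success();
                } finally {
                    tx.finish();
                }
                db.shutdown();
            }
        }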

  • Appending data to NSFetchedResultsController during find or create loop

    - by Justin Williams
    I have a table view that is managed by an NSFetchedResultsController. I am having an issue with a find-or-create operation, however. When the user hits the bottom of my table view, I query my server for another batch of content. If an item doesn't exist in the local cache, we create it and store it. If it does exist, however, I want to append that data to the fetched results controller and display it. I can't quite figure that part out. Here's what I'm doing thus far:

        1. Pass the returned array of values from my server to an NSOperation to process.
        2. In the operation, create a new managed object context to work with.
        3. In the operation, iterate through the array and execute a fetch request to see if each object exists (based on its server id).
        4. If the object doesn't exist, create it and insert it into the operation's managed object context.
        5. After the iteration completes, save the managed object context, which triggers a merge notification on my main thread.

    At this point, any objects that weren't locally cached in my Core Data store before will appear, but the ones that previously existed do not come along for the ride. I feel like it's something simple I'm missing, and I could use a nudge in the right direction.

  • How to target multiple versions of .NET Framework from MSBuild?

    - by McKAMEY
    I am improving the builds for an open source project which currently supports .NET Framework v2.0, v3.5, and now v4.0. Up until now, I've restricted myself to v2.0 to ensure compatibility, but with VS2010 I am interested in having real targeted builds. I'm looking for some guidance on how to edit the MSBuild csproj/sln to be able to cleanly produce builds for each target. I'm willing to have complexity in the csproj and in a batch file to control the build. My goal is to be able to have a command line script that could produce the builds without needing Visual Studio installed, but only the necessary .NET Framework(s). Ideally, I'd like to minimize dependencies on additional software. I notice that a lot of people use NAnt (e.g. Ninject builds many targets with NAnt) but I'm unsure if this is necessary or if they are just more familiar with it. I'm pretty sure this can be done but am having trouble finding a definitive guide on setting it up and best practices. Bonus: my next step after getting this set up will be to better support Mono Framework. Any help on doing this same thing for Mono would be much appreciated.

  • kick off a map reduce job from my java/mysql webapp

    - by Brian
    Hi guys, I need a bit of architecture advice. I have a Java-based webapp, with a JPA-based ORM backed onto a MySQL relational database. Now, as part of the application I have a batch job that compares thousands of database records with each other. This job has become too time-consuming and needs to be parallelized. I'm looking at using MapReduce and Hadoop in order to do this. However, I'm not too sure how to integrate this into my current architecture. I think the easiest initial solution is to find a way to push data from MySQL into Hadoop jobs. I have done some initial research on this and found the following relevant information and possibilities:

        1) https://issues.apache.org/jira/browse/HADOOP-2536 - gives an interesting overview of some inbuilt JDBC support
        2) http://architects.dzone.com/articles/tools-moving-sql-database - describes some third-party tools to move data from MySQL to Hadoop

    To be honest, I'm just starting out with learning about HBase and Hadoop, but I really don't know how to integrate this into my webapp. Any advice is greatly appreciated. Cheers, Brian
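    On point 1, the JIRA issue above turned into Hadoop's DBInputFormat, which reads splits of a JDBC table directly into map tasks. A rough sketch of wiring it up (the table, column, and class names are made up, and the mapper/reducer doing the actual comparison is omitted):

        import java.io.DataInput;
        import java.io.DataOutput;
        import java.io.IOException;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.io.Writable;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
        import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
        import org.apache.hadoop.mapreduce.lib.db.DBWritable;

        public class RecordCompareJob {
            // One row of the (illustrative) "records" table.
            public static class RecordRow implements Writable, DBWritable {
                long id;
                String payload;
                public void readFields(ResultSet rs) throws SQLException { id = rs.getLong(1); payload = rs.getString(2); }
                public void write(PreparedStatement ps) throws SQLException { ps.setLong(1, id); ps.setString(2, payload); }
                public void readFields(DataInput in) throws IOException { id = in.readLong(); payload = in.readUTF(); }
                public void write(DataOutput out) throws IOException { out.writeLong(id); out.writeUTF(payload); }
            }

            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                        "jdbc:mysql://localhost/mydb", "user", "pass");
                Job job = new Job(conf, "record-compare");
                job.setJarByClass(RecordCompareJob.class);
                job.setInputFormatClass(DBInputFormat.class);
                // table name, conditions, order-by, then the columns to read
                DBInputFormat.setInput(job, RecordRow.class, "records", null, "id", "id", "payload");
                // mapper/reducer and output types for the comparison would be set here
                job.waitForCompletion(true);
            }
        }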

  • Finding missing symbols in libstd++ on Debian/squeeze

    - by Florian Le Goff
    I'm trying to use a pre-compiled library provided as a .so file. This file is dynamically linked against a few libraries:

        $ ldd /usr/local/test/lib/libtest.so
            linux-gate.so.1 => (0xb770d000)
            libstdc++-libc6.1-1.so.2 => not found
            libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb75e1000)
            libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7499000)
            /lib/ld-linux.so.2 (0xb770e000)
            libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb747c000)

    Unfortunately, in Debian/squeeze there is no libstdc++-libc6.1-1.so.* file - only a libstdc++.so.* file, provided by the libstdc++6 package. I tried linking (using ln -s) libstdc++-libc6.1-1.so.2 to the libstdc++.so.6 file. It does not work; a batch of symbols seems to be lacking when I try to ld my .o files with this lib:

        /usr/local/test/lib/libtest.so: undefined reference to `__builtin_vec_delete'
        /usr/local/test/lib/libtest.so: undefined reference to `istrstream::istrstream(int, char const *, int)'
        /usr/local/test/lib/libtest.so: undefined reference to `__rtti_user'
        /usr/local/test/lib/libtest.so: undefined reference to `__builtin_new'
        /usr/local/test/lib/libtest.so: undefined reference to `istream::ignore(int, int)'

    What would you do? How can I find out which lib exports those symbols?

  • Ruby Gem Install question + answer(on windows vista Home Basic environment)

    - by Vamsi
    Recently I have been having problems installing the rcov gem on my Windows (Vista Home Basic) environment, and after googling I found one solution:

        gem install rcov -v 0.8.1.1.0   # version that installs without errors
        gem update rcov                 # update to the latest version, in my case rcov-0.8.1.2.0-x86-mswin32

    But this solution didn't work on my colleague's system (Windows XP), and after that we came to know about the RubyInstaller DevKit for Windows. But that DevKit is not working on my Vista machine; when I tried gem install rcov in my command prompt, it gave me this error:

        C:\Users\Vamsi>gem install rcov
        Building native extensions.  This could take a while...
        ERROR:  Error installing rcov:
                ERROR: Failed to build gem native extension.

        D:/Spritle/Programs/Ruby/bin/ruby.exe extconf.rb
        creating Makefile

        nmake
        'nmake' is not recognized as an internal or external command,
        operable program or batch file.

        Gem files will remain installed in D:/Spritle/Programs/Ruby/lib/ruby/gems/1.8/gems/rcov-0.9.8 for inspection.
        Results logged to D:/Spritle/Programs/Ruby/lib/ruby/gems/1.8/gems/rcov-0.9.8/ext/rcovrt/gem_make.out

    So after that my colleague tried to install nmake as well, but it threw some other error. Can someone suggest a better solution for solving this problem on all Windows environments? I am aware of Cygwin for Windows, but I am not sure that is a 100% solution either.

  • JMX - MBean automated registration on application deployment

    - by Gadi
    Hi All, I need some direction with JMX and J2EE. I am aware (after a few weeks of research) that the JMX specification says little about deployment. There are a few vendor-specific implementations of what I am looking for, but none is cross-vendor. I would like to automate the deployment of MBeans and their registration with the server: I need the server to load and register my MBeans when the application is deployed, and remove them when the application is undeployed. I develop with NetBeans 6.7.1, GlassFish 2.1, J2EE 5, and EJB 3. More specifically, I need a way to manage timer-service runs. My application needs to run different archiving agents and batch reporting, and I was hoping JMX would give me remote access to create and manage the timer services and enable the user to create his own schedule. If the MBean is auto-registered on application deployment, the user can immediately connect and manage the schedule. On the other hand, how can an EJB connect to or access an MBean? Many thanks in advance. Gadi.
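    One portable trick, if the application includes (or can include) a web module, is to tie MBean registration to the web application lifecycle with a ServletContextListener, which the container invokes at deploy and undeploy time. A sketch - the ObjectName and the ScheduleManager MBean are illustrative, and the listener still has to be declared in web.xml:

        import java.lang.management.ManagementFactory;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;
        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        public class MBeanLifecycleListener implements ServletContextListener {
            // Stub MBean, purely for illustration.
            public interface ScheduleManagerMBean { int getRunCount(); }
            public static class ScheduleManager implements ScheduleManagerMBean {
                public int getRunCount() { return 0; }
            }

            private ObjectName name;

            public void contextInitialized(ServletContextEvent sce) {
                try {
                    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                    name = new ObjectName("myapp:type=ScheduleManager"); // illustrative name
                    server.registerMBean(new ScheduleManager(), name);
                } catch (Exception e) {
                    throw new RuntimeException("MBean registration failed", e);
                }
            }

            public void contextDestroyed(ServletContextEvent sce) {
                try {
                    ManagementFactory.getPlatformMBeanServer().unregisterMBean(name);
                } catch (Exception e) {
                    // swallow: undeploy should not fail on cleanup
                }
            }
        }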

  • Flash CS4 compiler Error 1120 when embedding pngs into class instance variables.

    - by theolagendijk
    I have a Flash CS4 (Flash 9, ActionScript 3.0) project that compiles and runs perfectly on my machine. However, it is part of a big batch of .fla files that I want to compile on another (faster) machine. When I copy the project (the .fla and all ActionScript and asset files) to the faster machine, its Flash CS4 compiler gives me compiler error 1120, "Access of undefined property ButtonPause_PauseNormal". The property "PauseNormal" is an embedded PNG. The PNG is available; there are no transcoder errors. Here's the ActionScript for the class "ButtonPause":

        package nl.platipus.NissanESM.buttons {

            import flash.display.*;
            import flash.events.*;

            public class ButtonPause extends Sprite {

                [Embed(source="../../../../player/pause.png")]
                private var PauseNormal:Class;

                [Embed(source="../../../../player/pause_mo.png")]
                private var PauseMouseOver:Class;

                private var stateNormal:Bitmap;
                private var stateMouseOver:Bitmap;

                public function ButtonPause() {
                    stateNormal = new PauseNormal();
                    stateNormal.width = 29;
                    stateNormal.height = 14;
                    stateNormal.alpha = 1;
                    addChild(stateNormal);

                    stateMouseOver = new PauseMouseOver();
                    stateMouseOver.width = 29;
                    stateMouseOver.height = 14;
                    stateMouseOver.alpha = 0;
                    addChild(stateMouseOver);

                    width = 29;
                    height = 14;

                    addEventListener(MouseEvent.MOUSE_OVER, handleMouseOver);
                    addEventListener(MouseEvent.MOUSE_OUT, handleMouseOut);
                }

                private function handleMouseOver(evt:MouseEvent):void {
                    stateNormal.alpha = 0;
                    stateMouseOver.alpha = 1;
                }

                private function handleMouseOut(evt:MouseEvent):void {
                    stateNormal.alpha = 1;
                    stateMouseOver.alpha = 0;
                }
            }
        }

    (Both machines run the exact same Flash CS4 Professional Version 10.0.2 installation, and both have the exact same publish settings and ActionScript 3.0 settings.) What's going on?

  • Tool for generating flat files from SQL objects dynamically

    - by Fabio Gouw
    Hello! I'm looking for a tool or component that generates flat files from a SQL Server query result (from a stored procedure or a SELECT * over a table or view). This will be a batch process which runs every day, and every day a new file is created. I could use SQL Server Integration Services (DTS), but I have a mandatory requirement: the layout of the output file needs to be dynamic. If a new column is added to my query result, the file must have that new column too, without my having to modify the SSIS package. If a column is removed, then the flat file will no longer have it. I've tried to do this with SSIS, but when I create a new package I need to specify the number of columns. Another requirement is configuring the output format depending on the data type of each column: if it's a datetime, the format needs to be YYYY-MM-DD; if it's a float, I need to use 2 decimal digits; and so on. Does anyone know a tool that does this job? Thanks

  • SQL Server Blocking Issue

    - by Robin Weston
    We currently have an issue that occurs roughly once a day on a SQL Server 2005 database server, although the time it happens is not consistent. Basically, the database grinds to a halt and starts refusing connections with the following error message (this includes logging into SSMS):

        A connection was successfully established with the server, but then an error occurred
        during the login process. (provider: TCP Provider, error: 0 - The specified network
        name is no longer available.)

    Our CPU usage for SQL is usually around 15%, but when the DB is in its broken state it's around 70%, so it's clearly doing something, even if no one can connect. Even if I disable the web app that uses the database, the CPU still doesn't go down. I am unable to restart the SQLSERVER process as it is unresponsive, so I have to end up killing the process manually, which then puts the DB into Suspect/Recovery mode (which I can fix, but it's a pain). Below are some PerfMon stats I gathered when the DB was in its broken state which might help; I have a bunch more if people want to request them:

        Active Transactions:                  2   (Never Changes)
        Logical Connections:                 34   (NC)
        Processes Blocked:                   16   (NC)
        User Connections:                    30   (NC)
        Batch Requests:                       0   (NC)
        Active Jobs:                          2   (NC)
        Log Truncations:                    596   (NC)
        Log Shrinks:                         24   (NC)
        Longest Running Transaction Time:    99   (NC)

    I guess the key is finding out what the DB is spending its CPU on, but as I can't even log into SSMS this isn't possible with the standard methods. Disturbingly, I can't even use the dedicated admin connection to get into SSMS - I get the same timeout as with all other requests. Any advice, recommendations, or even sympathy, is much appreciated!
