Search Results

Search found 1864 results on 75 pages for 'dump'.


  • How do I install LFE on Ubuntu Karmic?

    - by karlthorwald
    Erlang was already installed: $dpkg -l|grep erlang ii erlang 1:13.b.3-dfsg-2ubuntu2 Concurrent, real-time, distributed function ii erlang-appmon 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP application monitor ii erlang-asn1 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP modules for ASN.1 support ii erlang-base 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP virtual machine and base applica ii erlang-common-test 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP application for automated testin ii erlang-debugger 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP application for debugging and te ii erlang-dev 1:13.b.3-dfsg-2ubuntu2 Erlang/OTP development libraries and header [... many more] Erlang seems to work: $ erl Erlang R13B03 (erts-5.7.4) [source] [64-bit] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false] Eshell V5.7.4 (abort with ^G) 1> I downloaded lfe from github and checked out 0.5.2: git clone http://github.com/rvirding/lfe.git cd lfe git checkout -b local0.5.2 e207eb2cad $ configure configure: command not found $ make mkdir -p ebin erlc -I include -o ebin -W0 -Ddebug +debug_info src/*.erl #erl -I -pa ebin -noshell -eval -noshell -run edoc file src/leex.erl -run init stop #erl -I -pa ebin -noshell -eval -noshell -run edoc_run application "'Leex'" '"."' '[no_packages]' #mv src/*.html doc/ Must be something stupid i missed :o $ sudo make install make: *** No rule to make target `install'. Stop. $ erl -noshell -noinput -s lfe_boot start {"init terminating in do_boot",{undef,[{lfe_boot,start,[]},{init,start_it,1},{init,start_em,1}]}} Crash dump was written to: erl_crash.dump init terminating in do_boot () Is there an example how I would create a hello world source file and compile and run it?
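
    On the final question, a minimal hello-world sketch (hedged: this assumes the 0.5.x-era LFE syntax, where calls into other modules are written with the (: mod fun ...) form, and that the ebin directory produced by make above is on the code path):

        ;; hello.lfe
        (defmodule hello
          (export (start 0)))

        (defun start ()
          (: io format '"Hello, World!~n"))

    From an Erlang shell started with erl -pa ebin, it should then be possible to compile and run it with lfe_comp:file("hello"). followed by hello:start().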

  • How to change TestNG dataProvider order

    - by momad
    Hi, I am running hundreds of tests against a large publishing system and would like to parallelize the tests using TestNG, but I cannot find any easy way of doing this. Each test case instantiates an instance of this publisher, sends some messages, waits for those messages to be published, then dumps out the contents of the publish queues and compares them against the expected outcome. With so many tests, this takes a very long time to complete (1 day or more), even if I parallelize using threads. We've found that in testing this sort of system, it's best to start up the system once, run all tests to send their messages, wait for publish to do its thing, dump all outputs, and then match outputs with tests and verify. For example, instead of the following: @Test public void testRule1() { Publisher pub = new Publisher(); pub.sendRule(new Rule("test1-a")); sleep(10); // wait 10 seconds pub.dumpRules(); verifyRule("test1-a"); } we wanted to do something like the following: @Test public void testRule1(boolean sendMode) { if(sendMode) { this.pub.sendRule(new Rule("test1-a")); } else { verifyRule("test1-a"); } } where a dataProvider runs through all the tests with sendMode = true, then dumpAllRules() is performed, and then all of the tests run again with sendMode = false. The problem is that TestNG calls the same method twice in a row, once with sendMode = true and immediately again with sendMode = false, rather than finishing the send pass for every test first. Is there any way to accomplish this in TestNG? Thanks!
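
    One way to get that send-everything-first ordering (a hedged sketch, not from the question: it assumes TestNG group dependencies behave as documented, and that splitting each rule into a send method and a verify method is acceptable) is to phase the run with groups:

        @Test(groups = "send")
        public void sendRule1() { pub.sendRule(new Rule("test1-a")); }

        @Test(groups = "dump", dependsOnGroups = "send")
        public void dumpAll() { pub.dumpAllRules(); }

        @Test(groups = "verify", dependsOnGroups = "dump")
        public void verifyRule1() { verifyRule("test1-a"); }

    Every send-phase method then runs (and can be parallelized) before the single dump step, and all verify-phase methods run after it.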

  • Rails - difference between config.cache_store and config.action_controller.cache_store?

    - by gsmendoza
    If I set this in my environment config.action_controller.cache_store = :mem_cache_store ActionController::Base.cache_store will use a memcached store but Rails.cache will use a memory store instead: $ ./script/console >> ActionController::Base.cache_store => #<ActiveSupport::Cache::MemCacheStore:0xb6eb4bbc @data=<MemCache: 1 servers, ns: nil, ro: false>> >> Rails.cache => #<ActiveSupport::Cache::MemoryStore:0xb78b5e54 @data={}> In my app, I use Rails.cache.fetch(key){ object } to cache objects inside my helpers. All this time, I assumed that Rails.cache uses the memcached store so I'm surprised that it uses memory store. If I change the cache_store setting in my environment to config.cache_store = :mem_cache_store both ActionController::Base.cache_store and Rails.cache will now use the same memory store, which is what I expect: $ ./script/console >> ActionController::Base.cache_store => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache> >> Rails.cache => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache> However, when I run the app, I get a "marshal dump" error in the line where I call Rails.cache.fetch(key){ object } no marshal_dump is defined for class Proc Extracted source (around line #1): 1: Rails.cache.fetch(fragment_cache_key(...), :expires_in => 15.minutes) { ... } vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'dump' vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'set_without_newrelic_trace' What gives? Is Rails.cache meant to be a memory store? Should I call controller.cache_store.fetch in the places where I call Rails.cache.fetch?

  • How to keep character encoding with database queries.

    - by JasonS
    Hi, I am doing the following. 1) I am exporting a database and saving it to a file called dump.sql. 2) The file is then transferred to a different server via PHP FTP. 3) When the file has been successfully transferred, the administrator has the option to run a 'dbtransfer' script on the new host. 4) This script splits the dump into individual queries and runs them line by line. This works great - however there is a problem with foreign-language encoding. We are using UTF-8. Step 1: this works fine; the file is in UTF-8 format. Step 3: when I test the contents of the dump.sql file using mb_check_encoding(), the string comes back as UTF-8. Step 4: this creates tables with utf8_general_ci encoding and the information is dumped in. When I check the table after the transfer I get records like this: 'ç,Ç,ö,Ö,ü,Ü,ı,İ,ş,Ş,ğ,Ğ'. I don't understand how a UTF-8 string can lose its encoding when it goes into the database. Am I missing a step? Do I need to run some sort of function to ensure the string is parsed as UTF-8? Once the system is installed I can save foreign-language queries; it is just the transfer that is messing up. Any ideas?
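
    The usual culprit here is the character set of the database connection rather than the file itself. A minimal sketch of forcing the connection to UTF-8 before replaying the dump (this assumes the transfer script uses mysqli, which the question does not state; names and credentials are placeholders):

        $db = new mysqli('localhost', 'user', 'pass', 'target_db');
        $db->set_charset('utf8');            // equivalent to running "SET NAMES 'utf8'"
        foreach ($queries as $sql) {         // $queries: statements split out of dump.sql
            $db->query($sql);
        }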

  • Firefox extension is freezing Firefox until request is completed

    - by Michael
    For some reason this function freezes Firefox until it has fully retrieved the response from the requested site. Is there any mechanism to prevent the freezing, so it works as expected? In XUL: <statusbarpanel id="eee_label" tooltip="eee_tooltip" onclick="eee.retrieve_rate(event);"/> In JavaScript: retrieve_rate: function(e) { var ajax = null; ajax = new XMLHttpRequest(); ajax.open('GET', 'http://site.com', false); ajax.onload = function() { if (ajax.status == 200) { var regexp = /blabla/g; var match = regexp.exec(ajax.responseText); while (match != null) { window.dump('Currency: ' + match[1] + ', Rate: ' + match[2] + ', Change: ' + match[3] + "\n"); if(match[1] == "USD") rate_USD = sprintf("%s:%s", match[1], match[2]); if(match[1] == "EUR") rate_EUR = sprintf("%s:%s", match[1], match[2]); if(match[1] == "RUB") rate_RUB = sprintf("%s/%s", match[1], match[2]); match = regexp.exec(ajax.responseText); } var rate = document.getElementById('eee_label'); rate.label = rate_USD + " " + rate_EUR + " " + rate_RUB; } else { } }; ajax.send(); I tried putting a window.dump() right after ajax.send(), and that too only showed up in the console after the request had completed.
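
    The blocking comes from the synchronous request: passing false as the third argument to open() makes send() block the UI thread until the response arrives. A minimal sketch of the asynchronous form (same handler body as above, only the open() call changes):

        var ajax = new XMLHttpRequest();
        ajax.open('GET', 'http://site.com', true);   // true = asynchronous
        ajax.onload = function() {
            // parse ajax.responseText and update the eee_label statusbarpanel here,
            // exactly as in the handler above
        };
        ajax.send();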

  • x86 CMP Instruction Difference

    - by Pindatjuh
    Question What is the (non-trivial) difference between the following two x86 instructions? 39 /r CMP r/m32,r32 Compare r32 with r/m32 3B /r CMP r32,r/m32 Compare r/m32 with r32 Background I'm building a Java assembler, which will be used by my compiler's intermediate language to produce Windows-32 executables. Currently I have following code: final ModelBase mb = new ModelBase(); // create new memory model mb.addCode(new Compare(Register.ECX, Register.EAX)); // add code mb.addCode(new Compare(Register.EAX, Register.ECX)); // add code final FileOutputStream fos = new FileOutputStream(new File("test.exe")); mb.writeToFile(fos); fos.close(); To output a valid executable file, which contains two CMP instruction in a TEXT-section. The executable outputted to "text.exe" will do nothing interesting, but that's not the point. The class Compare is a wrapper around the CMP instruction. The above code produces (inspecting with OllyDbg): Address Hex dump Command 0040101F |. 3BC8 CMP ECX,EAX 00401021 |. 3BC1 CMP EAX,ECX The difference is subtle: if I use the 39 byte-opcode: Address Hex dump Command 0040101F |. 39C1 CMP ECX,EAX 00401021 |. 39C8 CMP EAX,ECX Which makes me wonder about their synonymity and why this even exists.
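
    Some added context (not part of the original question): the two opcodes set the flags identically; they differ only in which operand is encoded in the ModRM r/m field, so the choice is only forced when one operand is in memory. For register-register forms either encoding is legal, which is why an assembler is free to pick one:

        cmp dword [counter], eax    ; memory as the first operand  -> must be encoded with 39 /r
        cmp eax, dword [counter]    ; memory as the second operand -> must be encoded with 3B /r

    (counter is a placeholder memory operand; NASM-style syntax.)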

  • What is the suggested approach to Syncing/Backing up/Restoring from SQL Server 2008 to SQL Server 2005?

    - by Eoin Campbell
    I only have SQL Server 2008 (Dev Edition) on my development machine, and I only have SQL Server 2005 available with my hosting company (and I don't have direct connection access to that database). I'm just wondering what the best approach is for: getting the initial DB structure & data into production, and keeping any structural/data changes in sync in future. As far as I can see... Replication - not an option because I can't connect to the production DB. Restoring a backup - not an option because, as far as I can see, you cannot export a DB from 2008 that is restorable in 2005 (even with the 2008 DB set in 2005 compatibility mode), and it wouldn't make sense to restore production over the top of my dev version anyway. Dump all the scripts from my 2008 database, revert my dev machine from 2008 to 2005, recreate the database from the scripts, then just use backup & restore to get the initial DB into production, then run scripts through the web panel from that point onwards. Or dump all the scripts from my 2008 database and generate the entire 2005 DB from scripts in production, then run scripts through the web panel from that point onwards. With the last 2 options, I'd probably need to script all the data inserts as well using some tool (which I presume exists on the web). Are there any other possible solutions that I'm not considering?

  • Munging non-printable characters to dots using string.translate()

    - by Jim Dennis
    So I've done this before and it's a surprisingly ugly bit of code for such a seemingly simple task. The goal is to translate any non-printable character into a . (dot). For my purposes "printable" excludes the last few characters from string.printable (new-lines, tabs, and so on). This is for printing things like the old MS-DOS debug "hex dump" format ... or anything similar to that (where additional whitespace will mangle the intended dump layout). I know I can use string.translate() and, to use that, I need a translation table. So I use string.maketrans() for that. Here's the best I could come up with: filter = string.maketrans( string.translate(string.maketrans('',''), string.maketrans('',''),string.printable[:-5]), '.'*len(string.translate(string.maketrans('',''), string.maketrans('',''),string.printable[:-5]))) ... which is an unreadable mess (though it does work). From there you can use something like: for each_line in sometext: print string.translate(each_line, filter) ... and be happy. (So long as you don't look under the hood). Now it is more readable if I break that horrid expression into separate statements: ascii = string.maketrans('','') # The whole ASCII character set nonprintable = string.translate(ascii, ascii, string.printable[:-5]) # Optional delchars argument filter = string.maketrans(nonprintable, '.' * len(nonprintable)) And it's tempting to do that just for legibility. However, I keep thinking there has to be a more elegant way to express this!
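
    One more-direct alternative (a sketch, not from the question; it assumes Python 2, where str.translate() takes a 256-character table) is to build the table with a comprehension instead of nested maketrans/translate calls:

        import string

        printable = set(string.printable[:-5])   # drops \t \n \r \x0b \x0c, keeps space
        table = ''.join(c if c in printable else '.' for c in map(chr, range(256)))

        for each_line in sometext:               # sometext as in the question
            print each_line.translate(table)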

  • help with Firefox extension

    - by Johnny Grass
    I'm writing a Firefox extension that creates a socket server which will output the active tab's URL when a client makes a connection to it. I have the following code in my javascript file: var serverSocket; function startServer() { var listener = { onSocketAccepted : function(socket, transport) { try { var outputString = gBrowser.currentURI.spec + "\n"; var stream = transport.openOutputStream(0,0,0); stream.write(outputString,outputString.length); stream.close(); } catch(ex2){ dump("::"+ex2); } }, onStopListening : function(socket, status){} }; try { serverSocket = Components.classes["@mozilla.org/network/server-socket;1"] .createInstance(Components.interfaces.nsIServerSocket); serverSocket.init(7055,true,-1); serverSocket.asyncListen(listener); } catch(ex){ dump(ex); } document.getElementById("status").value = "Started"; } startServer(); As it is, it works for multiple tabs in a single window. If I open multiple windows, it ignores the additional windows. I think it is creating a server socket for each window, but since they are using the same port, the additional sockets fail to initialize. I need it to create a server socket when the browser launches and continue running when I close the windows (Mac OS X). As it is, when I close a window but Firefox remains running, the socket closes and I have to restart firefox to get it up an running. How do I go about that?

  • Raycasting with tags problems in unity3d

    - by user1855858
    I need some help here. I have a piece of code that searches for unblocked neighbour nodes with a raycast, and I only want raycasts that collide with objects tagged "WP". Both iterations show the right results, and so do the debug output and the raycast - the raycast does hit something - but when I check what the raycast collided with, no result is shown. Does anyone know what's wrong with this code? int flag = 0, flagNeigh = 0; for(flag = 0; flag< wayPoints.WPList.Length; flag++) // iteration to seek neighbour nodes { for (flagNeigh = 0; flagNeigh < wayPoints.WPList.Length; flagNeigh++) { if (wayPoints.WPList[flag].loc != wayPoints.WPList[flagNeigh].loc) // skip the node's own location { if (Physics.Raycast(wayPoints.WPList[flag].loc.position, wayPoints.WPList[flagNeigh].loc.position, out hitted)) // raycast towards every node other than itself { if (hitted.collider.gameObject.CompareTag("WP")) // check whether the ray hit a waypoint node { print(flag + " : " + flagNeigh + " : " + wayPoints.WPList[flagNeigh].loc.position); // debug output - this is where nothing gets printed } } } } } Thanks for any answers, and sorry for my bad English.
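
    One likely problem (an assumption, since the rest of the script isn't shown): Physics.Raycast(origin, direction, out hit) expects a direction vector as its second argument, but the code passes the neighbour's position. A hedged sketch of the corrected call for the inner loop:

        Vector3 origin = wayPoints.WPList[flag].loc.position;
        Vector3 target = wayPoints.WPList[flagNeigh].loc.position;
        Vector3 direction = target - origin;                      // a direction, not a position

        RaycastHit hitted;
        if (Physics.Raycast(origin, direction, out hitted, direction.magnitude))
        {
            if (hitted.collider.gameObject.CompareTag("WP"))
            {
                print(flag + " : " + flagNeigh + " : " + target);
            }
        }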

  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping & re-creating) is an easy sql script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data. I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle) where you can right-click on a grid and click Save As-Insert Statements, and it will just dump a big sql script wherever you want. The only way I can think of doing it is to save the table to an excel sheet and then write an excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.

  • equality on the sender of an event

    - by Berryl
    I have an interface for a UI widget, two of which are attributes of a presenter. public IMatrixWidget NonProjectActivityMatrix { set { // validate the incoming value and set the field _nonProjectActivityMatrix = value; .... // configure & load non-project activities } public IMatrixWidget ProjectActivityMatrix { set { // validate the incoming value and set the field _projectActivityMatrix = value; .... // configure & load project activities } The widget has an event that both presenter objects subscribe to, and so there is an event handler in the presenter like so: public void OnActivityEntry(object sender, EntryChangedEventArgs e) { // calculate newTotal here .... if (ReferenceEquals(sender, _nonProjectActivityMatrix)) { _nonProjectActivityMatrix.UpdateTotalHours(feedback.ActivityTotal); } else if (ReferenceEquals(sender, _projectActivityMatrix)) { _projectActivityMatrix.UpdateTotalHours(feedback.ActivityTotal); } else { // ERROR - we should never be here } } The problem is that the ReferenceEquals on the sender fails, even though it is the implemented widget that is the sender - the same implemented widget that was set to the presenter attribute! Can anyone spot what the problem / fix is? Cheers, Berryl I didn't know you could edit nicely. Cool. Here is the event raising code: void OnGridViewNumericUpDownEditingControl_ValueChanged(object sender, EventArgs e) { // omitted to save sapce if (EntryChanged == null) return; var args = new EntryChangedEventArgs(activityID, dayID, Convert.ToDouble(amount)); EntryChanged(this, args); } Here is the debugger dump of the presenter attribute, sans namespace info: ?_nonProjectActivityMatrix {WinPresentation.Widgets.MatrixWidgetDgv} [WinPresentation.Widgets.MatrixWidgetDgv]: {WinPresentation.Widgets.MatrixWidgetDgv} Here is the debugger dump of the sender: ?sender {WinPresentation.Widgets.MatrixWidgetDgv} base {Core.GUI.Widgets.Lookup.MatrixWidgetBase<Core.GUI.Widgets.Lookup.DynamicDisplayDto>}: {WinPresentation.Widgets.MatrixWidgetDgv} _configuration: {Domain.Presentation.Timesheet.Matrix.WeeklyMatrixConfiguration} _wrappedWidget: {Win.Widgets.DataGridViewDynamicLookupWidget} AllowUserToAddRows: true ColumnCount: 11 Count: 4 EntryChanged: {Method = {Void OnActivityEntry(System.Object, Smack.ConstructionAdmin.Domain.Presentation.Timesheet.Matrix.EntryChangedEventArgs)}} SelectedCell: {DataGridViewNumericUpDownCell { ColumnIndex=3, RowIndex=3 }} SelectedCellValue: "0.00" SelectedColumn: {DataGridViewNumericUpDownColumn { Name=MONDAY, Index=3 }} SelectedItem: {'AdministrativeActivity: 130-04', , AdministrativeTime, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00} Berryl

  • question about MySQL database migration

    - by WilliamLou
    Hi there: I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration also involves some schema changes, for example adding some new columns to several tables, adding some new tables, etc. The only method I can think of is to use a PHP or Python script (the two languages I know) to connect to both databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, but the extra column should just have the default value 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database. Are there any tools, or a better method than writing a script yourself? I don't need to worry about concurrent writes here - the old database will be down (not open to public usage, only for the upgrade) for a while. Thanks!!
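
    A rough sketch of the usual non-scripted route (host names, credentials and the column definition are placeholders, not from the question): pipe a mysqldump from the old server into the new one, then apply the schema changes on the new server and let MySQL back-fill the default:

        mysqldump -h old-host -u user -p --databases mydb | mysql -h new-host -u user -p
        mysql -h new-host -u user -p mydb -e "ALTER TABLE A ADD COLUMN new_col INT NOT NULL DEFAULT 0"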

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

  • high cpu in IIS

    - by Miki Watts
    Hi all. I'm developing a POS application that has a local database on each POS computer, and communicates with the server using WCF hosted in IIS. The application has been deployed in several customers for over a year now. About a week ago, we've started getting reports from one of our customers that the server that the IIS is hosted on is very slow. When I've checked the issue, I saw the application pool with my process rocket to almost 100% cpu on an 8 cpu server. I've checked the SQL Activity Monitor and network volume, and they showed no significant overload beyond what we usually see. When checking the threads in Process Explorer, I saw lots of threads repeatedly calling CreateApplicationContext. I've tried installing .Net 2.0 SP1, according to some posts I found on the net, but it didn't solve the problem and replaced the function calls with CLRCreateManagedInstance. I'm about to capture a dump using adplus and windbg of the IIS processes and try to figure out what's wrong. Has anyone encountered something like this or has an idea which directory I should check ? p.s. The same version of the application is deployed in another customer, and there it works just fine. I also tried rolling back versions (even very old versions) and it still behaves exactly the same. Edit: well, problem solved, turns out I've had an SQL query in there that didn't limit the result set, and when the customer went over a certain number of rows, it started bogging down the server. Took me two days to find it, because of all the surrounding noise in the logs, but I waited for the night and took a dump then, which immediately showed me the query.

  • Using JQuery to traverse DOM structure, finding a specific <table> element located after HTML 'comment'

    - by Shadow
    I currently have a website source code (no control over the source) which contains certain content that needs to be manipulated. This would be simple on the surface, however there is no unique ID attribute on the tag in question that can uniquely identify it, and therefore allow for further traversal. Here is a snippet of the source code, surrounding the tag in question. ... <td width="100%"> <!--This table snaps the content columns(one or two)--> <table border="0" width="100%"> ... Essentially, the HTML comment stuck out as an easy way to gain access to that element. Using the JQuery comment add-on from this question, and some help from snowlord comment below, I have been able to identify the comment and retrieve the following output using the 'dump' extension. $('td').comments().filter(":contains('This table snaps the content columns(one or two)')").dump(); returns; jQuery Object { 0 = DOMElement [ nodeName: DIV nodeValue: null innerHTML: [ 0 = String: This table snaps the content columns(one or two) ] ] } However I am not sure how to traverse to the sibling element in the DOM. This should be simple, but I haven't had much selector experience with JQuery. Any suggestions are appreciated.

  • Cryptography: best practices for keys in memory?

    - by Johan
    Background: I got some data encrypted with AES (ie symmetric crypto) in a database. A server side application, running on a (assumed) secure and isolated Linux box, uses this data. It reads the encrypted data from the DB, and writes back encrypted data, only dealing with the unencrypted data in memory. So, in order to do this, the app is required to have the key stored in memory. The question is, is there any good best practices for this? Securing the key in memory. A few ideas: Keeping it in unswappable memory (for linux: setting SHM_LOCK with shmctl(2)?) Splitting the key over multiple memory locations. Encrypting the key. With what, and how to keep the...key key.. secure? Loading the key from file each time its required (slow and if the evildoer can read our memory, he can probably read our files too) Some scenarios on why the key might leak: evildoer getting hold of mem dump/core dump; bad bounds checking in code leading to information leakage; The first one seems like a good and pretty simple thing to do, but how about the rest? Other ideas? Any standard specifications/best practices? Thanks for any input!
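
    For the first idea, a minimal C sketch of pinning the key's pages so they cannot be swapped out (illustrative only: error handling is minimal, the key-loading step is elided, and the final memset may be optimized away, so a platform-specific secure-zero function is preferable where available):

        #include <string.h>
        #include <sys/mman.h>

        static unsigned char key[32];

        int load_key(void) {
            if (mlock(key, sizeof key) != 0)     /* keep these pages resident, never swapped */
                return -1;
            /* ... fill key[] from wherever it is provisioned ... */
            return 0;
        }

        void drop_key(void) {
            memset(key, 0, sizeof key);          /* may be elided; prefer explicit_bzero if available */
            munlock(key, sizeof key);
        }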

  • CRONTAB doesn't finish svndump

    - by Andrew
    I just discovered that the automated dumps I've been creating of my SVN repository have been getting cut off early and basically only half the dump is there. It's not an emergency, but I hate being in this situation. It defeats the purpose of making automated backups in the first place. The command I'm using is below. If I execute it manually in the terminal, it completes fine; the output.txt file is 16 megs in size with all 335 revisions. But if I leave it to crontab, it bails at the halfway mark, at around 8.1 megs and only the first 169 revisions. # m h dom mon dow command 18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt I actually save to a dated gzipped file, and there's no shortage of space on the server, so this is not a disk space issue. It seems to bail after two seconds, so this could be a time issue, but the file size is the same every single time for the past month, so I don't think it's that either. Does crontab execute within a limited memory space?
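
    One way to narrow this down (a suggestion, not a diagnosis): cron jobs run with a minimal environment and no terminal, so capturing stderr and using an absolute path to svnadmin often surfaces the real error. A hedged variant of the crontab line (the /usr/bin path is an assumption - check it with which svnadmin):

        18 00 * * * /usr/bin/svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt 2>> /home/andrew/svndump.err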

  • Best way to migrate export/import from SQL Server to oracle

    - by matao
    Hi guys! I'm faced with needing access for reporting to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from sql server to oracle and I'd like some advice on the best way to go about it... The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (oracle) side as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting. What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file using the sql server equivalent of sqlplus as a scheduled task, dump into a Well Known Location, then on the oracle side have a cron job that copies down the file and loads it with sql loader and rebuilds indexes etc. This is all doable, but very manual. Is there one or a combination of FOSS or standard oracle/SQL Server tools that could automate this for me? the Irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV dumping detail or the SQL loader script, just say dump this view out to CSV on one side, and on the other truncate and insert into this table from CSV and not worry about mapping column names and all other arcane sqlldr voodoo... best practices? thoughts? comments? edit: I have about 50+ columns all of varying types and lengths in my dataset, which is why I'd prefer to not have to write out how to generate and map each single column...
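
    A rough sketch of the two ends of that pipeline (every object name, path and column is a placeholder; it assumes bcp on the SQL Server side and SQL*Loader on the Oracle side, as described above):

        rem SQL Server side: export the denormalised view to CSV (scheduled task)
        bcp "SELECT * FROM reporting.dbo.report_flat_view" queryout report.csv -c -t"," -S SQLHOST -T

        -- Oracle side: report.ctl, run from cron as: sqlldr user/pass control=report.ctl
        LOAD DATA
        INFILE 'report.csv'
        TRUNCATE
        INTO TABLE report_flat
        FIELDS TERMINATED BY ','
        (col1, col2, col3)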

  • Ubuntu 10.04: Unable to Start RabbitMQ Server Post-Installation

    - by Garland W. Binns
    After installing RabbitMQ on Ubuntu 10.04 I receive a failure message that the service was unable to start. Any insight into the issue would be greatly appreciated! Below are contents of startup_log and startup_err. Startup_log: {error_logger,{{2012,7,7},{15,50,31}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,etimedout}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]} {error_logger,{{2012,7,7},{15,50,31}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.20.0},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{ancestors,[net_sup,kernel_sup,<0.9.0]},{messages,[]},{links,[#Port<0.100,<0.17.0]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,24},{reductions,512}],[]]} {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfa,{net_kernel,start_link,[[rabbitmqprelaunch877,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]} {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]} {error_logger,{{2012,7,7},{15,50,31}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]} {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"} Startup_err: Crash dump was written to: erl_crash.dump Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})

  • How to get a debug flow of execution in C++

    - by Rich
    Hi, I work on a global trading system which supports many users. Each user can book,amend,edit,delete trades. The system is regulated by a central deal capture service. The deal capture service informs all the user of any updates that occur. The problem comes when we have crashes, as the production environment is impossible to re-create on a test system, I have to rely on crash dumps and log files. However this doesn't tell me what the user has been doing. I'd like a system that would (at the time of crashing) dump out a history of what the user has been doing. Anything that I add has to go into the live environment so it can't impact performance too much. Ideas wise I was thinking of a MACRO at the top of each function which acted like a stack trace (only I could supply additional user information, like trade id's, user dialog choices, etc ..) The system would record stack traces (on a per thread basis) and keep a history in a cyclic buffer (varying in size, depending on how much history you wanted to capture). Then on crash, I could dump this history stack. I'd really like to hear if anyone has a better solution, or if anyone knows of an existing framework? Thanks Rich
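
    A bare-bones sketch of the macro-plus-ring-buffer idea described above (C++11 for thread_local; the buffer size, names and the crash-time hook are all placeholders):

        #include <array>
        #include <cstdio>
        #include <string>
        #include <utility>

        struct TraceBuffer {                        // per-thread history of recent scope entries
            static const std::size_t N = 256;       // how much history to keep
            std::array<std::string, N> entries;
            std::size_t next = 0;
            void push(std::string s) { entries[next % N] = std::move(s); ++next; }
            void dump() const {                     // call this from the crash/dump handler
                for (std::size_t i = 0; i < N && i < next; ++i)
                    std::fprintf(stderr, "%s\n", entries[(next - 1 - i) % N].c_str());
            }
        };

        thread_local TraceBuffer g_trace;

        struct ScopeTrace {                         // records entry into a function
            ScopeTrace(const char* fn, std::string info) {
                g_trace.push(std::string(fn) + " " + std::move(info));
            }
        };

        #define TRACE_SCOPE(info) ScopeTrace traceScope_(__func__, (info))

        void bookTrade(int tradeId) {               // example use in a booking path
            TRACE_SCOPE("trade id " + std::to_string(tradeId));
            // ... existing booking logic ...
        }

    On a crash, the handler would walk the per-thread buffers and write the recent history next to the existing crash dump; pushing one string per call is cheap for most paths, and the macro can be compiled out of hot loops in release builds if needed.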

  • c++ figuring out memory layout of members programatically

    - by anon
    Suppose in one program, I'm given: class Foo { int x; double y; char z; }; class Bar { Foo f1; int t; Foo f2; }; int main() { Bar b; b.f1.z = 'h'; b.f2.z = 'w'; ... some crap setting value of b; FILE *f = fopen("dump", "wb"); // c-style file fwrite(&b, sizeof(Bar), 1, f); } Suppose in another program, I have: int main() { FILE *f = fopen("dump", "rb"); std::string Foo = "int x; double y; char z;"; std::string Bar = "Foo f1; int t; Foo f2;"; // now, given this, is it possible to read out // the values of b.f1.z and b.f2.z set earlier? } What I'm asking is: given I have the types of a class, can I figure out how C++ lays it out?
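
    On the writing side, where the real types are visible to the compiler, offsetof and sizeof report the layout for plain structs like these; a small sketch (assuming the members are accessible, e.g. declared in a struct rather than a private class):

        #include <cstddef>
        #include <cstdio>

        struct Foo { int x; double y; char z; };
        struct Bar { Foo f1; int t; Foo f2; };

        int main() {
            std::printf("Foo size=%zu  x@%zu y@%zu z@%zu\n", sizeof(Foo),
                        offsetof(Foo, x), offsetof(Foo, y), offsetof(Foo, z));
            std::printf("Bar size=%zu  f1@%zu t@%zu f2@%zu\n", sizeof(Bar),
                        offsetof(Bar, f1), offsetof(Bar, t), offsetof(Bar, f2));
        }

    The reading program still has to be told those numbers (or recompute them with the same compiler and settings), because the standard does not pin down padding or alignment.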

  • Filtering null values with pig

    - by arianp
    It looks like a silly problem, but I can´t find a way to filter null values from my rows. This is the result when I dump the object geoinfo: DUMP geoinfo; ([longitude#70.95853,latitude#30.9773]) ([longitude#-9.37944507,latitude#38.91780853]) (null) (null) (null) ([longitude#-92.64416,latitude#16.73326]) (null) (null) ([longitude#-9.15199849,latitude#38.71179122]) ([longitude#-9.15210796,latitude#38.71195131]) here is the description DESCRIBE geoinfo; geoinfo: {geoLocation: bytearray} What I'm trying to do is to filter null values like this: geoinfo_no_nulls = FILTER geoinfo BY geoLocation is not null; but the result remains the same. nothing is filtered. I also tried something like this geoinfo_no_nulls = FILTER geoinfo BY geoLocation != 'null'; and I got an error org.apache.pig.backend.executionengine.ExecException: ERROR 1071: Cannot convert a map to a String What am I doing wrong? details, running on ubuntu, hadoop-1.0.3 with pig 0.9.3 pig -version Apache Pig version 0.9.3-SNAPSHOT (rexported) compiled Oct 24 2012, 19:04:03 java version "1.6.0_24" OpenJDK Runtime Environment (IcedTea6 1.11.4) (6b24-1.11.4-1ubuntu0.12.04.1) OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)

  • object references an unsaved transient instance

    - by developer
    Hi, I have 2 tables, user and userprofile, both with almost identical fields. The user table references the userprofile table by its primary key ID. My requirement is that on the click of a button I need to dump a user table record into the userprofile table. For a particular user record, if there is a corresponding userprofile entry I am able to dump the data successfully, but if there is no record in the userprofile table then I need to create a new record by dumping all the data. My problem is that I can update the data when the record is present in the userprofile table, but in the case where I have to create a new record I get the error "object references an unsaved transient instance - save the transient instance before flushing". <class name="User"> <id name="ID" type="Int32"> <generator class="native" /> </id> <many-to-one name="Pid" class="UserProfile" /> </class> UserProfile is another table, and Pid above references the primary key ID of the UserProfile table.
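
    Two common ways out of that error (suggestions, not from the question): explicitly save the new UserProfile before saving the User that points at it, or let NHibernate cascade the save through the association by adding a cascade attribute to the mapping:

        <many-to-one name="Pid" class="UserProfile" cascade="save-update" />

    With cascade="save-update", persisting the User also persists a transient UserProfile referenced through Pid.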

  • Pickled my dictionary from ZODB but i got a less in size one?

    - by Someone Someoneelse
    I use ZODB and i want to copy my 'database_1.fs' file to another 'database_2.fs', so I opened the root dictionary of that 'database_1.fs' and I (pickle.dump) it in a text file. Then I (pickle.load) it in a dictionary-variable, in the end I update the root dictionary of the other 'database_2.fs' with the dictionary-variable. It works, but I wonder why the size of the 'database_1.fs' not equal to the size of the other 'database_2.fs'. They are still copies of each other. def openstorage(store): #opens the database data={} data['file']=filestorage data['db']=DB(data['file']) data['conn']=data['db'].open() data['root']=data['conn'].root() return data def getroot(dicty): return dicty['root'] def closestorage(dicty): #close the database after Saving transaction.commit() dicty['file'].close() dicty['db'].close() dicty['conn'].close() transaction.get().abort() then that's what i do:- import pickle loc1='G:\\database_1.fs' op1=openstorage(loc1) root1=getroot(op1) loc2='G:database_2.fs' op2=openstorage(loc2) root2=getroot(op2) >>> len(root1) 215 >>> len(root2) 0 pickle.dump( root1, open( "save.txt", "wb" )) item=pickle.load( open( "save.txt", "rb" ) ) #now item is a dictionary root2.update(item) closestorage(op1) closestorage(op2) #after I open both of the databases #I get the same keys in both databases #But `database_2.fs` is smaller that `database_2.fs` in size I mean. >>> len(root2)==len(root1)==215 #they have the same keys True Note: (1) there are persistent dictionaries and lists in the original database_1.fs (2) both of them have the same length and the same indexes.
