Search Results

Search found 17254 results on 691 pages for 'tool advice'.

Page 573 of 691

  • I'm having an issue using GLshort to represent vertex and normal data

    - by Xylopia
    As my project gets close to the optimization stage, I notice that reducing vertex metadata could vastly improve the performance of 3D rendering. I've searched around and found the following advice on Stack Overflow:

    - Using GL_SHORT instead of GL_FLOAT in an OpenGL ES vertex array
    - How do you represent a normal or texture coordinate using GLshorts?
    - Advice on speeding up OpenGL ES 1.1 on the iPhone

    Simple experiments show that switching from "FLOAT" to "SHORT" for vertex and normal isn't tough, but what troubles me is this: when you scale the vertices back to their original size (with glScalef), the normals are multiplied by the reciprocal of the scale. The natural remedy is to multiply the normals by the scale before submitting them to the GPU. But then my short normals almost become 0, because the scale factor is usually smaller than 1. Duh!

    How do you use "short" for both vertex and normal at the same time? I've been trying this and that for about a full day, but so far I can only get "float vertex with byte normal" or "short vertex with float normal" to work. Your help would be truly appreciated.
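
    For what it's worth, a minimal sketch of one way to keep both attributes as GLshort under OpenGL ES 1.1 (fixed-function): store the normals at full short range and let GL renormalize them after the glScalef, instead of baking the tiny scale factor into the shorts. The include path, the data and the scale value below are assumptions, not taken from the question.

        #include <OpenGLES/ES1/gl.h>   /* assumed iOS include path */

        /* positions quantized to shorts; the model was scaled up by 100 when quantizing */
        static const GLshort verts[]   = { 100, 0, 0,   0, 100, 0,   0, 0, 100 };
        /* normals also stored as shorts, pre-scaled to the full [-32767, 32767] range */
        static const GLshort normals[] = { 0, 0, 32767,   0, 0, 32767,   0, 0, 32767 };

        void draw(void)
        {
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glVertexPointer(3, GL_SHORT, 0, verts);
            glNormalPointer(GL_SHORT, 0, normals);

            /* undo the quantization scale on the positions only ... */
            glMatrixMode(GL_MODELVIEW);
            glPushMatrix();
            glScalef(0.01f, 0.01f, 0.01f);

            /* ... and let GL renormalize the normals instead of pre-multiplying them */
            glEnable(GL_NORMALIZE);        /* GL_RESCALE_NORMAL is cheaper for uniform scales */

            glDrawArrays(GL_TRIANGLES, 0, 3);
            glPopMatrix();
        }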

    Read the article

  • My first MVVM application architecture setup

    - by 1110
    Ok, time has come for my first WPF project :). I worked with Flex and PureMVC before, and I know how important project setup is in RIAs. I decided to work with MVVM, and to use the PRISM framework.

    The application is something like an operating system: there will be a 'shell' (the parent for smaller applications), and I plan to build the smaller applications as modules. So I plan to structure the project something like this:

        Module_A {view, viewModel, model, assets}   // for example a calculator
        Module_B {view, viewModel, model, assets}   // a notebook, etc.

    I read the Prism documentation and I see that the parent for all these modules should be the shell project, and this is my main question here:

        Parent_Project {App.xaml, Bootstrapper.cs, Shell.xaml}

    Because this shell will be fullscreen with background images (like an operating system) and right-click with some features, is it OK to create a folder structure like in the Module_XYZ projects for Shell.xaml here? I want to start the project with a good structure, so any advice is welcome. Thanks
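
    For reference, a minimal sketch of how a module plugs into a Prism shell. This assumes Prism 4 with the Unity extensions; the class names and the namespaces may need adjusting for another Prism version, and nothing here comes from the question itself.

        using System.Windows;
        using Microsoft.Practices.Prism.Modularity;
        using Microsoft.Practices.Prism.UnityExtensions;

        // Module_A project: each "small application" is an IModule
        public class ModuleA : IModule
        {
            public void Initialize()
            {
                // register Module_A's views / view-models / regions with the container here
            }
        }

        // Parent_Project: the shell project only knows about the module catalog
        public class Bootstrapper : UnityBootstrapper
        {
            protected override DependencyObject CreateShell()
            {
                return new Shell();                    // Shell.xaml lives in the parent project
            }

            protected override void ConfigureModuleCatalog()
            {
                var catalog = (ModuleCatalog)ModuleCatalog;
                catalog.AddModule(typeof(ModuleA));    // one AddModule call per Module_XYZ
            }
        }

    With that split, the shell's own assets (background images, right-click features) can live in the parent project without affecting how the modules are laid out.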

    Read the article

  • d3: Coloring Multiple Lines from Nested Data

    - by diet_coke
    I'm currently assembling some line graphs with circles at the datapoints, from arrays of JSON objects formatted like so:

        var data = [{
            "name": "metric1",
            "datapoints": [ [10.0, 1333519140], [48.0, 1333519200] ]
        }, {
            "name": "metric2",
            "datapoints": [ [48.0, 1333519200], [12.0, 1333519260] ]
        }];

    I want to have a color for each metric, so I'm trying to color them based on the index of the object within the array data. The code I have currently for just placing the circles looks like:

        // We bind an svg group to each metric.
        var metric_groups = this.vis.selectAll("g.metric_group")
            .data(data).enter()
            .append("g")
            .attr("class", "metric_group");

        // Then bind a circle for each datapoint.
        var circles = metric_groups.selectAll("circle")
            .data(function(d) { return d.datapoints; });

        circles.enter().append("circle")
            .attr("r", 3.5);

    Now if I change that last bit to something like:

        circles.enter().append("circle")
            .attr("r", 3.5)
            .style("fill", function(d, i) { return i % 2 ? "red" : "blue"; });

    I get alternating red and blue circles, as could be expected. Taking some advice from Nested Selections : 'Nesting and Index', I tried:

        circles.enter().append("circle")
            .attr("r", 3.5)
            .style("fill", function(d, i, j) { return j % 2 ? "red" : "blue"; });

    Which doesn't work (j is undefined), presumably because we are in the named property datapoints, rather than an array element. How might I go about doing the coloring that I want without changing my data structure? Thanks!
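
    One approach that fits the existing structure (a sketch; the ordinal color scale is an assumption, any per-metric color function works): set the fill on each metric group rather than on the circles. SVG's fill is inherited by child elements, and the parent's index i is well-defined at the group level.

        var color = d3.scale.category10();

        var metric_groups = this.vis.selectAll("g.metric_group")
            .data(data).enter()
            .append("g")
            .attr("class", "metric_group")
            .style("fill", function(d, i) { return color(i); });  // circles inherit this fill

        metric_groups.selectAll("circle")
            .data(function(d) { return d.datapoints; })
            .enter().append("circle")
            .attr("r", 3.5);                                       // no per-circle fill needed

    Alternatively, from inside the circle callback you can reach the parent group's datum via this.parentNode.__data__ and key the color on its name instead of an index.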

    Read the article

  • ActiveX won't run from server

    - by user1709555
    I have an MFC ActiveX control that runs fine from disk, but when I put it on a server I get errors.

    Client: Win7 machine
    Server: Ubuntu running Apache

    The HTML and the errors are below, please advise. Thanks, Nahum

        <html>
        <HEAD>
        <TITLE>myFirstOCX.CAB</TITLE>
        <script type="text/javascript" FOR="window">
        function fn() {
          try {
            document.all('Ctrl1').AboutBox();   // error: Undefined : object doesn't have AboutBox() method
            // OR
            var obj = new ActiveXObject("activex.activexCtrl");
            obj.AboutBox();                     // error: Undefined : Automation server can't create object
          } catch (ex) {
            alert("Error: " + ex.Description + " : " + ex.message);
          }
        }
        </script>
        </HEAD>
        <body bgcolor=lightblue>
        <TABLE BORDER>
          <TR>
            <TD><OBJECT CLASSID="CLSID:E228C560-FA68-48E6-850F-B1167515C920"
                        CODEBASE=".\nsip.CAB#version=1,0,0,1"
                        ID="Ctrl1" name="Ctrl1">
            </OBJECT></TD>
          </TR>
          <TR>
            <TD ALIGN="CENTER"><INPUT TYPE=BUTTON VALUE="Click Me" onclick="fn()"></TD>
          </TR>
        </TABLE>
        <INPUT TYPE=TEXT ID="ConnectionString" VALUE="">
        </body>
        </html>
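
    Two server-side things worth checking, sketched below rather than offered as a confirmed fix: that the CODEBASE URL (currently a Windows-style relative path) actually resolves to the CAB file from the client, and that Apache serves .cab files with a recognised MIME type so IE will attempt the install. The server path shown is an assumption.

        <!-- point CODEBASE at a URL the browser can fetch from the server -->
        <OBJECT CLASSID="CLSID:E228C560-FA68-48E6-850F-B1167515C920"
                CODEBASE="http://your-server/controls/nsip.CAB#version=1,0,0,1"
                ID="Ctrl1" name="Ctrl1">
        </OBJECT>

        # Apache (httpd.conf or .htaccess): serve .cab with its registered MIME type
        AddType application/vnd.ms-cab-compressed .cab

    Beyond that, IE's security zone settings typically block installation of unsigned ActiveX controls from an internet server even when the same control works fine from disk.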

    Read the article

  • Timer appears to be pausing when screen becomes inactive

    - by elchuppa
    So I have a very simple android activity that starts a timer when you hit a button:

        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                doStuff();
            }
        }, 15 * 60 * 1000);

    So this worked reasonably well for me when I was testing, but as it turns out, when the screen becomes inactive so does the timer. I was a bit surprised by this. I understand you need to create a service to have anything running in the background, but I hadn't realized this is required for an activity in the foreground when the phone has inactivated the screen due to lack of activity. What confuses me is I think this worked as I expected originally, and just in the last few weeks or so has the timer been affected by the phone saving power. I could be wrong though.

    So basically my questions are: am I seeing expected behavior? Do I need to create all timers as services or somehow disallow powersaving?

    Thanks for any advice, Patrick
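
    A sketch of the usual alternative when the work has to happen even if the device goes to sleep: schedule it with AlarmManager and a wakeup alarm instead of a java.util.Timer. The receiver class name is invented, and doStuff() is assumed to be reachable from it.

        // In the Activity, when the button is hit: schedule the work 15 minutes out.
        // ELAPSED_REALTIME_WAKEUP fires even if the device has gone to sleep.
        private void scheduleDoStuff() {
            AlarmManager am = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
            Intent intent = new Intent(this, DoStuffReceiver.class);          // hypothetical receiver
            PendingIntent pi = PendingIntent.getBroadcast(this, 0, intent, 0);
            am.set(AlarmManager.ELAPSED_REALTIME_WAKEUP,
                   SystemClock.elapsedRealtime() + 15 * 60 * 1000, pi);
        }

        // Declared in AndroidManifest.xml as a <receiver>; runs when the alarm fires.
        public class DoStuffReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                doStuff();   // the question's doStuff(), assumed accessible here
            }
        }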

    Read the article

  • Google MapView doesn't work after signing the app

    - by user164589
    Hi guys, I am facing an Android application signing problem. My application contains a Google MapView. When I compile the app and run it on the emulator, the MapView works fine, but after signing the app, the MapView doesn't work. I've got a Google Maps API key, and it works on the simulator. I was able to sign the app once, 2 months ago; then I upgraded the app and now I need to sign it again. I don't know why the signed app's MapView doesn't work. How do I fix it? Please advise.

    I used the following steps to sign the app:

    1. Run Eclipse.
    2. Select the project.
    3. Right Click - Android Tools - Export Signed Application Package - then fill in the forms. (In the forms, validity years: 200, and all passwords are the same.)

    Can you suggest something? Thanks in advance.
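
    A sketch of the usual cause with the old Maps add-on (v1) API: the Maps API key is tied to the signing certificate's MD5 fingerprint, so a key generated from the debug keystore stops working once the app is signed with the release keystore. Assuming that's the situation here, the fix is to generate a second key from the release certificate. The alias, keystore name and layout attributes below are placeholders.

        # print the release certificate's fingerprint
        keytool -list -v -alias mykey -keystore my-release-key.keystore

        # register the MD5 fingerprint at the Maps API v1 signup page to get a release key,
        # then use that key in the layout that declares the MapView:
        <com.google.android.maps.MapView
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:apiKey="YOUR_RELEASE_MAPS_KEY" />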

    Read the article

  • Can I get the IE debugger to break into long-running javascript

    - by Brian Deacon
    I have a page that has a byzantine amount of javascript running. In IE only, and only version 8, I get a long-script warning that I can reliably reproduce. I suspect it is event handlers triggering themselves in an infinite loop.

    The Developer Tools are limping horribly under the weight of the script running, but I do seem to be able to get the log to tell me what line of script it was executing when I aborted. It is inevitably some of the deep plumbing of the ExtJS code we use, and I can't tell where it is in my stack of code. A way of seeing the call stack would work, but preferably I'd like to be able to just break into the debugger when I get the long-script warning so I can step through the stack.

    There is a similar question posted, but the answers given were for a not-the-right-tool, or the not terribly helpful advice to eliminate half my code at a time on a binary hunt for the infinite loop. If my code were simple enough that I could do that, it probably wouldn't have gotten the infinite loop in the first place. If I could reproduce the problem in Firebug, I'd probably be a lot happier too.
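
    One low-tech way to get the debugger to stop inside the runaway code, sketched here with made-up handler names: wrap the suspect handlers so a debugger statement fires after an implausible number of calls. With the IE8 Developer Tools open, script execution pauses there and the call stack pane shows how it was reached.

        // Wrap a suspect event handler so it breaks into the debugger after `limit` calls.
        function guard(fn, limit) {
            var calls = 0;
            return function () {
                if (++calls > limit) {
                    debugger;            // IE8 dev tools (F12) pause here with the full call stack
                }
                return fn.apply(this, arguments);
            };
        }

        // Hypothetical usage - the component and handler names are not from the question:
        myPanel.on('resize', guard(onPanelResize, 500));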

    Read the article

  • "for" loop inside another "for" loop

    - by jnkrois
    Hello everybody. I have a quick question, and I'm sure some of you guys will be able to point out what I'm doing wrong. Basically, I want to create a table (a calendar) with the months and days of an entire year, and I'm using PHP to do it, but my code is doing something weird. What happens is that January is empty and the days for January are being put in February. My code so far is as follows:

        $months = 12;
        $monthsOfTheyear = array("January","February","March","April","May","June","July","August","September","October","November","December");
        $currentMonth = date("n");
        $currentYear = date("Y");
        $daysOfTheMonth = cal_days_in_month(CAL_GREGORIAN, $currentMonth, $currentYear);

        for ($i = 0; $i < $months; $i++) {
            echo "<tbody class='month'>";
            echo "<tr><td colspan='".$daysOfTheMonth."'>".$monthsOfTheyear[$i]."</td></tr><tr>";

            $daysOfEachMonth = cal_days_in_month(CAL_GREGORIAN, $i, $currentYear);
            for ($d = 1; $d <= $daysOfEachMonth; $d++) {
                echo "<td>".$d."</td>";
            }
            echo "</tr></tbody>";
        }

    I'm obviously doing something wrong, but I've been staring at the monitor for about an hour trying to figure it out. I'd appreciate any advice. Thanks
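
    For what it's worth, a sketch of the likely culprit: cal_days_in_month() expects months numbered 1 to 12, while $i runs 0 to 11, so each <tbody> gets the previous month's day count (and the first iteration gets none, which matches the empty January / shifted days symptom). Offsetting the index fixes it without restructuring the loop:

        <?php
        for ($i = 0; $i < $months; $i++) {
            // $i is the 0-based array index; cal_days_in_month() wants 1-12
            $daysOfEachMonth = cal_days_in_month(CAL_GREGORIAN, $i + 1, $currentYear);

            echo "<tbody class='month'>";
            echo "<tr><td colspan='".$daysOfTheMonth."'>".$monthsOfTheyear[$i]."</td></tr><tr>";
            for ($d = 1; $d <= $daysOfEachMonth; $d++) {
                echo "<td>".$d."</td>";
            }
            echo "</tr></tbody>";
        }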

    Read the article

  • How to design app to be modular / support plugins

    - by Lee
    I'm currently in the process of refactoring my webplayer so that we'll be more easily able to run it on our other internet radio stations. Much of the setup between these players will be very similar; however, some will need different UI plugins / other plugins. Currently in the webplayer I do something like this in its init():

        _this.ui = new UI();
        _this.ui.playlist = new Playlist();
        _this.ui.channelDropdown = new ChannelDropdown();
        _this.ui.timecode = new Timecode();
        // etc etc

    This works fine, but it locks me into requiring those objects at run time. What I'd like to do is be able to add those based on the station's needs. Basically my question is: do I need to add some kind of "addPlugin()" functionality here? And if I do that, do I need to constantly check from my WebPlayer object whether that plugin exists before it attempts to use it? Like:

        if (_hasPlugin('playlist')) this.plugins.playlist.add(track);

    I apologize if some of this might not be clear... really trying to get my head wrapped around all of this. I feel I'm closer but I'm still stuck. Any advice on how I should proceed with this would be greatly appreciated. Thanks in advance, Lee
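
    A small sketch of one way to do it (the helper names are invented, not from the question): keep a plugin registry on the player and route optional calls through a helper, so the existence check lives in one place instead of at every call site.

        // Minimal plugin registry sketch for the web player
        function WebPlayer() {
            this.plugins = {};
        }

        WebPlayer.prototype.addPlugin = function (name, plugin) {
            this.plugins[name] = plugin;
        };

        WebPlayer.prototype.withPlugin = function (name, fn) {
            // Runs fn only if the plugin was registered; silently a no-op otherwise.
            if (this.plugins.hasOwnProperty(name)) {
                fn(this.plugins[name]);
            }
        };

        // Per-station setup decides which plugins exist:
        var player = new WebPlayer();
        player.addPlugin('playlist', new Playlist());
        player.addPlugin('timecode', new Timecode());

        // Call sites no longer need scattered _hasPlugin checks:
        player.withPlugin('playlist', function (playlist) {
            playlist.add(track);
        });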

    Read the article

  • changing the serialization procedure for a graph of objects (.net framework)

    - by pierusch
    Hello, I'm developing a scientific application using the .NET Framework. The application depends heavily upon a large data structure (a tree-like structure) that has been serialized using a standard BinaryFormatter object. The graph structure looks like this:

        <Serializable()> Public Class BigObjet
            Inherits List(Of smallObject)
        End Class

        <Serializable()> Public Class smallObject
            Inherits List(Of otherSmallerObjects)
        End Class
        ...

    The BinaryFormatter object does a nice job, but it's not optimized at all, and the entire data structure reaches around 100 MB on my filesystem. Deserialization works too, but it's pretty slow (around 30 seconds on my quad core). I found a nice .dll on CodeProject (see "optimizing serialization..."), so I wrote a modified version of the classes above, overriding the default serialization/deserialization procedure, and reached very good results.

    The problem is this: I can't lose the data previously serialized with the old version, and I'd like to be able to use the new serialization/deserialization method. I have some ideas, but I'm pretty sure someone will be able to give me proper and better advice!

    1. Use a "helper" graph of objects that takes care of the entire serialization/deserialization procedure, reading data from the old format and converting it into the classes I need. This could work, but the BinaryFormatter "needs" to know the types being serialized, so........ :(
    2. Modify the "old" graph to include a modified version of the serialization procedure... so I'll be able to deserialize old files and save them with the new format...... this doesn't sound too good IMHO.

    Well, any help will be highly appreciated :)

    Read the article

  • Does .Net use Device Dependent or Device Independent Bitmaps?

    - by Brian
    When loading an image into memory, does .Net use DDB, DIB, or something else entirely? If possible, please cite your sources.

    I'm wondering because we currently have a classic ASP application that is using a 3rd party component to load images that is occasionally creating a “Not enough storage is available to process this command.” error. The error is very inconsistent but tends to happen on larger images (not always, but often). After resetting IIS, processing the same file again typically works just fine.

    After much research I have found that DDBs tend to have this problem when processing large images because they work out of video memory. Considering that we are running on a web server with an integrated video card and limited shared memory, this could certainly be our problem.

    We are in the early stages of converting our app to .Net and am wondering if using .Net for this might be a viable alternative to our current method, which is why I am asking the question. Any advice is welcome :) but out of curiosity if nothing else, I am really hoping for an answer to the question: does .Net use DDB or DIB?

    Read the article

  • Is there a best practice for concatenating MP3 Files, adjusting sample rates to match, while preserving original files?

    - by Scott
    Hello overflow community! Does anyone know if there is a "best practice" for concatenating mp3 files to create new files, while preserving the original files?

    I am working on a CentOS Linux machine, on the command line. I will eventually call the command line from a PHP script. I have been doing research and I have come up with a process that I think could work. It combines general advice from different forums, blogs, and sources like this one. So here I go:

    1. Create a temporary folder.
    2. Loop through the files to create a new, converted copy of each file in a "raw" format (which one, I don't know. I didn't know "raw" files existed before too long ago. I could use some suggestions on this).
    3. Store the paths to the temporary files in the temporary folder, then loop through the files to concatenate them and put the new merged file in the final "processed" directory.
    4. Delete the contents of the temporary folder, with the temporary raw files inside.
    5. Convert the final file from "raw" to mp3 and enjoy the finished result.

    I'm thinking that this course of action might be best because I can't necessarily control the quality of the original "source" mp3s. The only other option I could think of would be to create a script that would perform a similar process upon files being added to the system, leaving only the files with the "proper" format and removing the original "erroneous" file.

    Hopefully you can see that I have put some thought into this and that I'm trying to leverage the collective knowledge of this community to choose the best direction. Perhaps there is a better path that I could take? By concatenate, I mean to join together in sequence to create a new audio file from the "concatenated files."
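
    A sketch of that pipeline with common command-line tools, assuming ffmpeg, sox, and lame are installed on the CentOS box (none of them are named in the question): decode each source mp3 to WAV at one common sample rate, concatenate the WAVs, then re-encode once. The originals are never touched, and the directory names below are made up.

        #!/bin/sh
        tmp=$(mktemp -d)

        # steps 1-3: decode every source mp3 to 44.1 kHz stereo WAV in the temp folder
        for f in source/*.mp3; do
            ffmpeg -i "$f" -ar 44100 -ac 2 "$tmp/$(basename "$f" .mp3).wav"
        done

        # concatenate the intermediate WAVs in order
        sox "$tmp"/*.wav "$tmp/joined.wav"

        # step 5: encode the merged file back to mp3 in the processed directory
        lame -b 192 "$tmp/joined.wav" processed/joined.mp3

        # step 4: clean up the temporary raw files
        rm -rf "$tmp"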

    Read the article

  • Touch draw in Quartz 2D/Core Graphics

    - by OgreSwamp
    Hello, I'm trying to implement a "hand draw tool". At the moment the algorithm looks like this (I won't insert any code because the methods are quite big; I'll try to explain the idea):

    Drawing:

    - In the touchesBegan: method I create NSMutableArray *pointsArray, add the point to it and call setNeedsDisplay:.
    - In the touchesMoved: method I calculate the points between the last added point from pointsArray and the current point, add all of them to pointsArray, and call setNeedsDisplay:.
    - In the touchesEnded: method I calculate the points between the last added point from the array and the current point, set a flag touchesWereFinished, and call setNeedsDisplay:.

    Render:

    - The drawRect: method checks whether pointsArray != nil and whether there is any data in it. If there is, it starts to draw circles at each point of this array. If the touchesWereFinished flag is set, it saves the current context to a UIImage, releases pointsArray, sets it to nil and resets the flag.

    There are a lot of disadvantages to this method:

    - It is slow.
    - It becomes extremely slow when the user touches and moves the finger for a long time, and the array becomes enormous.
    - "Lines" composed of circles are ugly.

    I would like to change my algorithm to make it a bit faster and the lines smoother. As a result I would like to have lines like in the picture at the following URL (sorry, not enough reputation to insert an image): http://2.bp.blogspot.com/_r5VzEAUYXJ4/SrOYp8tJCPI/AAAAAAAAAMw/ZwDKXiHlhV0/s320/SketchBook+Mobile(4).png

    Can you advise me how I can draw lines this way (smooth and slim on the edges)? I thought of drawing circles with an alpha gradient on the edges (to make lines smoother), but it would be extremely slow IMHO. Thanks for the help
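
    One common way to get strokes closer to the linked screenshot (a sketch, assuming the points are stored as NSValue-wrapped CGPoints in pointsArray, which the question doesn't state): instead of stamping a circle per point, build one path through the points and stroke it with round caps and joins. Core Graphics anti-aliasing on the stroked path gives soft edges without per-circle alpha gradients.

        // Inside drawRect: - stroke a single path through the sampled points.
        - (void)drawRect:(CGRect)rect {
            if ([pointsArray count] < 2) return;

            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextSetLineWidth(ctx, 6.0f);                 // pen thickness (assumed)
            CGContextSetLineCap(ctx, kCGLineCapRound);
            CGContextSetLineJoin(ctx, kCGLineJoinRound);
            CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);

            CGPoint p = [[pointsArray objectAtIndex:0] CGPointValue];
            CGContextMoveToPoint(ctx, p.x, p.y);
            for (NSUInteger i = 1; i < [pointsArray count]; i++) {
                p = [[pointsArray objectAtIndex:i] CGPointValue];
                CGContextAddLineToPoint(ctx, p.x, p.y);
            }
            CGContextStrokePath(ctx);
        }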

    Read the article

  • Porting a web application to work in IE7

    - by Bears will eat you
    I'm developing a web application that uses lots of Javascript and CSS, both of my own creation and through third-party libraries. These include jQuery and the Google Maps & Visualization JS APIs. I've been testing everything in Firefox 3. Things are peachy until it turns out the main target of this webapp is (cue sad trombone) IE7.

    I'm looking for caveats, advice, libraries, or other references to help make this transition as easy as possible (not that it's actually going to be easy). I've already tried IE7.js, though it hasn't yet shown itself to be the silver bullet I was hoping for. I'm sure that it works as advertised; I think it's just not as all-encompassing as I'd like (example: colors like #4684EE and #DC3912, which are correctly rendered in FF3, are rendered as black in IE7, with or without IE7.js). Are there other libraries out there to help bring IE7 (more) in line with FF3?

    A corollary question: what debugger would you recommend for IE7? I'm currently using Firebug Lite, but it runs painfully slowly. Is there anything out there with similar features that I might have missed?

    Read the article

  • import csv file/excel into sql database asp.net

    - by kiev
    Hi everyone! I am starting a project with ASP.NET in Visual Studio 2008 / SQL Server 2000 (2005 in the future), using C#. The tricky part for me is that the existing DB schema changes often, and the import file's columns will all have to be matched up with the existing DB schema, since they may not be a one-to-one match on column names. (There is a lookup table that provides the table schema with the column names I will use.)

    I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I explored the FileUpload .NET control, as well as some 3rd-party upload controls such as SlickUpload, to accomplish the upload; the files uploaded should be < 500 MB.

    The next part is reading my CSV/Excel file and parsing it for display to the user, so they can match it with our DB schema. I saw CSVReader and others, but for Excel it's more difficult since I will need to support different versions.

    Essentially, the user performing this import will insert and/or update several tables from this import file. There are other more advanced requirements, like record matching and a preview of the import records, but I wish to understand how to do this first.

    Update: I ended up using CsvReader from LumenWorks.Framework for uploading the csv files.
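
    For reference, a sketch of what reading an uploaded file with the LumenWorks CsvReader looks like; the file path variable and the column-mapping step are assumptions, not from the question.

        using System.IO;
        using LumenWorks.Framework.IO.Csv;

        // Parse the uploaded CSV and expose its headers so the user can map them to the DB schema.
        using (var csv = new CsvReader(new StreamReader(uploadedFilePath), true))  // true = first row is headers
        {
            string[] headers = csv.GetFieldHeaders();   // show these next to the schema lookup table

            while (csv.ReadNextRecord())
            {
                for (int i = 0; i < csv.FieldCount; i++)
                {
                    string value = csv[i];
                    // map headers[i] to its destination column via the lookup table, then insert/update
                }
            }
        }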

    Read the article

  • Getting batch_action functionality in Symfony 1.0

    - by Alex Ciminian
    I'm currently working on a web app written in Symfony. I'm supposed to add an "export to CSV" feature in the backend/administration part of the app for some modules. In the list view, there should be an "Export" button which provides the user with a CSV file of the elements that are displayed (taking filtering criteria into account). I've created a method in the actions class of the module that takes a comma-separated list of ids and generates the CSV, but I'm not really sure how to add the link to it in the view. The problem is that the view doesn't exist anywhere; it's generated on the fly from the data in the generator.yml configuration file. I've posted the relevant part of the file below. I'm new to Symfony, so any help would be appreciated :).

    Thanks, Alex

    Update:

        list:
          display:        [id, =name, indemn, _status, _participants, _approved_, created_at]
          title:          Lista actiuni
          object_actions:
            _edit:        ~
            _delete:      ~
          actions:
            _create:      ~
            export_csv:
              name:       Export to CSV
              action:     CSVExport
              params:     id=csvActionSubmit
          filters:        [name, county_id, _status_filter, activity_id]
          fields:
            id:
              name:       Nr. crt.
        ...

    Thanks to your advice, I've managed to add a button that is linked to my action. The problem is that I also need to send some parameters to the action, because I may not want all the elements; filters may have been used. Unfortunately, the project is using Symfony 1.0, which doesn't support batch_actions. Currently, I'm working around this with Javascript (I parse the DOM to get the numeric ids from the display table and then build the link for the button). I really think there could be a better way to do this.

    Read the article

  • Is going for a BCS the right move for me?

    - by Michel Carroll
    I'm at a fork in the road. I need somebody to give me some advice from their personal journey in IT. At the moment, I have a college diploma (2 years) in Computer Programmer, and about 2 years of professional experience in the field of software. I'm currently freelancing my programming skills to the public, and am enjoying a nice income and the rewards of flexibly working on a variety of projects with different cool people.

    I'm young (21 years old), passionate about software, technology, the internet, and also business. I know that if I ever want to delve deeper into the software industry, I might have a hard time doing so without a Bachelor's in Computer Science. On one side, I think I'm better off getting my BCS while I'm still young and malleable. Also, the thought of learning even more stuff in my field is really exciting to me. On the flip side, it means another 3-4 years of studying, and jeopardizing my chances of going on vacation and accumulating wealth for a long time.

    Considering that I'm already pretty successful with my college diploma, do you think it's a good idea for me to go get my BCS? Will it open up many more doors in the future?

    Read the article

  • Performing a SVD on tweets. Memory problem

    - by plotti
    I have generated a huge csv file as output from my POS tagging and stemming. It looks like this:

                   word1  word2  word3  ...  word14400
        person1      1      2      0           1
        person2      0      0      1           0
        ...
        person650

    It contains the word counts for each person; this way I get a characteristic vector for each person. I want to run an SVD on this beast, but it seems the matrix is too big to be held in memory to perform the operation.

    My question is: should I reduce the column count by removing words which have a column sum of, for example, 1, which means that they have been used only once? Do I bias the data too much with this attempt?

    I tried the RapidMiner approach of loading the csv into a database and then sequentially reading it in with batches for processing, as RapidMiner proposes. But MySQL can't store that many columns in a table. If I transpose the data and then re-transpose it on import, it also takes ages...

    So in general I am asking for advice on how to perform an SVD on such a corpus.
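
    A sketch of one way around the memory wall without throwing columns away, assuming Python with NumPy/SciPy is an option (the question doesn't name a language): word-count matrices are mostly zeros, so build a sparse matrix straight from the csv and run a truncated SVD, which never materializes the dense 650 x 14400 array. The file name and k are placeholders.

        # Sparse, truncated SVD over the person x word count matrix.
        import csv
        from scipy.sparse import lil_matrix
        from scipy.sparse.linalg import svds

        n_persons, n_words = 650, 14400          # from the question
        X = lil_matrix((n_persons, n_words))

        with open("counts.csv") as f:            # hypothetical file name
            reader = csv.reader(f)
            next(reader)                         # skip the word1..word14400 header row
            for i, row in enumerate(reader):
                for j, value in enumerate(row[1:]):   # row[0] is the person label
                    v = float(value)
                    if v != 0.0:
                        X[i, j] = v              # only non-zero counts are stored

        # k largest singular triplets; k=50 is an arbitrary choice here
        U, s, Vt = svds(X.tocsr(), k=50)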

    Read the article

  • What Test Environment Setup do Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to:

    1. Type one command to run the test suite for ANY project I find on Github.
    2. Run autotest for ANY Github project so I can fork and make TESTABLE contributions.
    3. Build gems from the ground up with autotest and Shoulda.

    For one reason or another, I hardly ever run tests for projects I clone from Github. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose.

    So the questions are:

    1. What is your, the SO reader and Github project committer, test environment setup using autotest, so that whenever you git clone a gem, you can run the tests and autotest-develop them if desired?
    2. What are the guys who are writing the Paperclip Tests and Authlogic Tests doing? What is their setup?

    Thanks for the insight. Looking for answers that will make me a more effective tester.

    Read the article

  • How do you clear your mind after 8-10 hours per day of coding?

    - by Bryan
    Related Question- Ways to prepare your mind before coding?. I'm having a hard time taking my mind off of work projects in my personal time. It's not that I have a stressful job or tight deadlines; I love my job. I find that after spending the whole day writing code & trying to solve problems, I have an extremely hard time getting it out of my mind. I'm constantly thinking about the current project/problem/task all the time. It's keeping me from relaxing, and in the long run it just builds stress. Personal projects help to some extent, but mostly just to distract me. I still have source code bouncing around my head 16 hours a day. I'm still relatively new to the workforce. Have you struggled with this, perhaps as a young developer? How did you overcome it? Can anyone offer general advice on winding down after a long programming session?

    Read the article

  • Crystal Reports error 7 "Out of memory"

    - by pwee167
    Hi guys, I have a VB6 application, reportwriter, that runs a Crystal report and outputs a PDF. I have a .NET 3.5 ASP application that calls the reportwriter dll. I am getting an error saying 7 "Out of memory" when trying to load the report.

    I have 4 GB of memory and am running Windows 7 (64-bit). I also have Crystal Reports 11 with Service Pack 4 installed. I have registered the dll and referenced it in my project. I have run out of ideas about what could be causing this and I need some help; any advice will be very much appreciated. I suspect it has nothing to do with Crystal Reports, but I cannot be sure. Has anyone come across this error message/issue?

    Also, I have a friend running the identical project on an almost identical PC (bar a few different personal applications installed). It works for him. Cheers.

    Read the article

  • How do I use HTML5's localStorage in a Google Chrome extension?

    - by davidkennedy85
    I am trying to develop an extension that will work with Awesome New Tab Page. I've followed the author's advice to the letter, but it doesn't seem like any of the script I add to my background page is being executed at all.

    Here's my background page:

        <script>
        var info = {
          poke: 1,
          width: 1,
          height: 1,
          path: "widget.html"
        }

        chrome.extension.onRequestExternal.addListener(function(request, sender, sendResponse) {
          if (request === "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-poke") {
            chrome.extension.sendRequest(
              sender.id,
              {
                head: "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-pokeback",
                body: info,
              }
            );
          }
        });

        function initSelectedTab() {
          localStorage.setItem("selectedTab", "Something");
        }

        initSelectedTab();
        </script>

    Here is manifest.json:

        {
          "update_url": "http://clients2.google.com/service/update2/crx",
          "background_page": "background.html",
          "name": "Test Widget",
          "description": "Test widget for mgmiemnjjchgkmgbeljfocdjjnpjnmcg.",
          "icons": { "128": "icon.png" },
          "version": "0.0.1"
        }

    Here is the relevant part of widget.html:

        <script>
        var selectedTab = localStorage.getItem("selectedTab");
        document.write(selectedTab);
        </script>

    Every time, the browser just displays null. The local storage isn't being set at all, which makes me think the background page is completely disconnected. Do I have something wired up incorrectly?

    Read the article

  • MS Excel automation without macros in the generated reports. Any thoughts?

    - by ezeki77
    Hello! I know that the web is full of questions like this one, but I still haven't been able to apply the answers I can find to my situation.

    I realize there is VBA, but I always disliked having the program/macro living inside the Excel file, with the resulting bloat, security warnings, etc. I'm thinking along the lines of a VBScript that works on a set of Excel files while leaving them macro-free. Now, I've been able to "paint the first column blue" for all files in a directory following this approach, but I need to do more complex operations (charts, pivot tables, etc.), which would be much harder (impossible?) with VBScript than with VBA.

    For this specific example, knowing how to remove all macros from all files after processing would be enough, but all suggestions are welcome. Any good references? Any advice on how to best approach external batch processing of Excel files will be appreciated. Thanks!

    PS: I eagerly tried Mark Hammond's great PyWin32 package, but the lack of documentation and interpreter feedback discouraged me.
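
    For what it's worth, a sketch of what external automation looks like with PyWin32 (the file paths and the chart range are made up): everything VBA can do through the Excel object model is reachable over COM from outside the workbook, so the files themselves stay macro-free.

        # Drive Excel over COM from an external Python script (requires pywin32 and Excel installed).
        import win32com.client

        excel = win32com.client.Dispatch("Excel.Application")
        excel.Visible = False
        excel.DisplayAlerts = False

        wb = excel.Workbooks.Open(r"C:\reports\input.xls")     # hypothetical path
        ws = wb.Worksheets(1)

        ws.Columns(1).Interior.ColorIndex = 5                  # the "paint the first column blue" example

        chart = wb.Charts.Add()                                 # charts, pivot tables, etc. use the same object model as VBA
        chart.ChartType = 4                                     # xlLine
        chart.SetSourceData(ws.Range("A1:B20"))                 # assumed data range

        wb.SaveAs(r"C:\reports\output.xls")
        wb.Close(SaveChanges=False)
        excel.Quit()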

    Read the article

  • Ajax Request not working. onSuccess and onFailure not triggering

    - by Kye
    Hi all, I'm trying to make a page which will recursively call a function until a limit has been reached, and then stop. It uses an Ajax query to call an external script (which just echoes "done" for now); however, with neither onSuccess nor onFailure triggering, I'm finding it hard to find the problem.

    Here is the javascript for it. In the header for the webpage there is a script tag for an ajax.js document which contains the request data. I know the ajax.js works, as I've used it on another website.

        var Rooms = "1";
        var Items = "0";
        var ccode = "9999/1";
        var x = 0;

        function echo(string, start) {
            var ajaxDisplay = document.getElementById('ajaxDiv');
            if (start) { ajaxDisplay.innerHTML = string; }
            else { ajaxDisplay.innerHTML = ajaxDisplay.innerHTML + string; }
        }

        function locations() {
            echo("Uploading location " + x + " of " + Rooms, true);

            Ajax.Request("Perform/location.php", {
                method: 'get',
                parameters: { ccode: ccode, x: x },
                onSuccess: function(reply) {
                    alert("worked");
                    if (x < Rooms) {
                        x++;
                        locations();
                    } else {
                        x = 0;
                        echo("Done", true);
                    }
                },
                onFailure: function() {
                    alert("not worked");
                    echo("not done");
                }
            });

            alert("boo");
        }

    Any help or advice will be most appreciated.
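
    If ajax.js here is Prototype (the question doesn't say, so this is an assumption), one thing to check: Ajax.Request is a class and needs the new keyword; called as a plain function it never constructs the request, so neither callback runs. A minimal corrected call would look like:

        new Ajax.Request("Perform/location.php", {
            method: 'get',
            parameters: { ccode: ccode, x: x },
            onSuccess: function (reply) {
                alert("worked: " + reply.responseText);
            },
            onFailure: function () {
                alert("not worked");
            }
        });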

    Read the article

  • Best way to migrate export/import from SQL Server to Oracle

    - by matao
    Hi guys! I'm faced with needing access, for reporting, to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle, and I'd like some advice on the best way to go about it.

    The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side, as the firewall is in the way. The data needs to be transformed in the process from a star schema to a denormalised table ready for reporting.

    What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file, using the SQL Server equivalent of sqlplus as a scheduled task, and dumping it into a Well Known Location. Then on the Oracle side a cron job copies down the file, loads it with SQL*Loader and rebuilds indexes, etc.

    This is all doable, but very manual. Is there one tool, or a combination of FOSS or standard Oracle/SQL Server tools, that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV-dumping detail or the SQL*Loader script; just say "dump this view out to CSV" on one side, and on the other "truncate and insert into this table from CSV", and not worry about mapping column names and all the other arcane sqlldr voodoo... Best practices? Thoughts? Comments?

    Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column...
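
    For what it's worth, a sketch of the bcp-to-SQL*Loader pipeline described above (server names, credentials, file names and the column list are all placeholders): bcp can dump the denormalising view straight to a delimited file from a scheduled task, and a static control file on the Oracle side handles the truncate-and-load without any per-run scripting.

        rem SQL Server side (scheduled task): dump the denormalised view to a pipe-delimited file
        bcp "SELECT * FROM reporting.dbo.vw_report_flat" queryout D:\exports\report.dat -c -t"|" -S SQLSERVER01 -T

        -- Oracle side: report.ctl, used by a nightly cron job after copying the file down
        LOAD DATA
        INFILE 'report.dat'
        TRUNCATE
        INTO TABLE reporting_flat
        FIELDS TERMINATED BY '|'
        TRAILING NULLCOLS
        (col1, col2, col3)      -- list the 50+ columns once, generated from the view definition

        # Oracle host, after the nightly copy:
        sqlldr userid=rep/secret control=report.ctl log=report.log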

    Read the article
