Search Results

Search found 23098 results on 924 pages for 'multiple processes'.

Page 394/924 | < Previous Page | 390 391 392 393 394 395 396 397 398 399 400 401  | Next Page >

  • AutoVue for Agile Sessions at the Oracle Value Chain Summit 2013

    - by Pam Petropoulos
    At the upcoming Oracle Value Chain Summit, which takes place February 4 - 6, 2013 in San Francisco, CA, AutoVue Enterprise Visualization solutions will be covered in a variety of sessions within the Agile PLM solution area. Attend the following sessions during the Product Deep Dives & Demos Track, and discover the latest AutoVue for Agile capabilities, including how to streamline business processes, such as change management, by creating ECRs directly from within CAD designs.
        Visual Decision Making to Optimize New Product Development and Introduction - Tuesday, February 5, 12:45 pm to 1:30 pm
        Seeing the Forest: Next Generation Visualization - Wednesday, February 6, 3:15 pm to 4:00 pm
        Next-Generation CAD Data Management: MCAD, ECAD, and Software Configuration Management - Wednesday, February 6, 11:15 am to 12:00 pm
    Keep an eye on this blog for forthcoming details about each of these sessions. Don’t miss this opportunity to mingle with other AutoVue for Agile customers and meet one on one with the AutoVue product management and development team. Register now for the early bird rate of $195 and secure your spot at the Summit. Click here to register and learn more.

    Read the article

  • How can I generate a unique ID using a hash in Perl?

    - by sganesh
    I am writing a message-transfer program between multiple clients and a server, and I want to generate a unique message ID for every message. The ID should be generated by the server and returned to the client. Each message is a hash, e.g.: { api => POST, username => sganesh, pass => "pass", message => "hai", time => "current_time", } I want to generate a unique ID from this hash. I have tried some approaches, such as MD5 and freeze, but these give an unreadable ID; I want a meaningful or readable unique ID. I thought of using microseconds to differentiate the IDs, but the problem there is the multiple clients. In any situation my ID should be unique. Can anyone help me out of this problem? Thanks in advance.
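    One approach is to combine a client identifier, a microsecond timestamp, and a short digest of the message hash, which keeps the ID readable while making collisions across clients very unlikely. A minimal sketch follows, shown in Python for illustration (the question is about Perl, where Time::HiRes and Digest::MD5 provide the same ingredients); the function and field names here are assumptions, not from the original.

        import hashlib
        import json
        import time

        def make_message_id(message, client_id):
            """Readable unique ID: <client>-<microsecond timestamp>-<short digest>."""
            # Serialize the message deterministically so identical content hashes identically.
            payload = json.dumps(message, sort_keys=True).encode("utf-8")
            digest = hashlib.md5(payload).hexdigest()[:8]   # short, human-scannable fragment
            micros = int(time.time() * 1_000_000)           # microsecond-resolution timestamp
            return "{}-{}-{}".format(client_id, micros, digest)

        # Example usage with fields like those in the question
        msg = {"api": "POST", "username": "sganesh", "message": "hai"}
        print(make_message_id(msg, "client42"))   # e.g. client42-1717000000123456-9a0364b9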

    Read the article

  • SQL Server pivots? Is there some way to set column names to values within a row?

    - by ccsimpson3
    I am building a system of multiple trackers that will share many of the same columns, so there is a table for the trackers, a table for the tracker columns, and a cross-reference for which columns go with which tracker. When a user inserts a tracker row, the different column values are stored in multiple rows that share the same record ID, each storing both the value and the name of the particular column. I need to find a way to dynamically turn the name stored in each row into the column heading for the value stored in the same row, i.e.
        id | value | name
        ------------------
        23 | red   | color
        23 | fast  | speed
    needs to look like this:
        id | color | speed
        ------------------
        23 | red   | fast
    Any help is greatly appreciated, thank you.
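    In SQL Server itself this reshaping is usually done with PIVOT or with conditional aggregation over the name column; the data movement it performs is easy to see in a small Python sketch that groups the name/value rows by id and promotes each name to a key (the sample data below is hypothetical):

        from collections import defaultdict

        # Flattened name/value rows, shaped like the table in the question
        rows = [
            {"id": 23, "value": "red", "name": "color"},
            {"id": 23, "value": "fast", "name": "speed"},
        ]

        # Group by id and promote each 'name' to a column of its own
        pivoted = defaultdict(dict)
        for row in rows:
            pivoted[row["id"]][row["name"]] = row["value"]

        for record_id, columns in pivoted.items():
            print(record_id, columns)   # 23 {'color': 'red', 'speed': 'fast'}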

    Read the article

  • MEA Oracle University Partner Enablement Update (22nd March)

    - by swalker
    Become an Oracle GoldenGate 10 Certified Implementation Specialist
    Let Oracle University help you become an Oracle GoldenGate 10 Certified Implementation Specialist. The following Boot Camp has been scheduled so that you can gain the knowledge required not only to develop and implement solutions that will help your customers’ organizations make better decisions, take informed actions, and run more-efficient business processes, but also to pass the associated exam and get yourself specialized:
        Boot Camp: OPN Only Oracle GoldenGate 10 Implementation Boot Camp
        Dates: 26-28 Mar
        Location: Dubai
    Oracle University OPN Only Boot Camps are co-funded by Oracle Alliances and Channels, so they are offered to you at very attractive prices. For prices, more information, and assistance with registering, please contact:
        Ion Georgescu
        eMail: [email protected]
        Telephone: +40 21.367.93.72

    Read the article

  • How do I avoid the complexity concerns of frameworks while keeping my team marketable?

    - by Desolate Planet
    When deciding with my colleagues how to design a software project, most suggestions tend to be for using specific frameworks "because it's popular in the job market" or "that's the framework that gets recruiters on the phone," and never what I'm looking for, which is "because it's a good fit for the project, as it makes the system more adaptive to future changes and makes life easier for developers." I didn't start looking at projects in this way until I started reading up on domain-driven design. I've found that the actual domain is hidden deep under the frameworks used, and it's hard to learn the business processes that have been implemented by the software product. Is there a way to marry the two competing goals: getting exposure as a development team while still being able to avoid complexity? Are frameworks themselves that compromise, or are there other solutions out there?

    Read the article

  • While in CMD shell, copying files from host OS to guest VM locks files (VMware Player/Workstation)

    - by Malcolm
    We're running the latest versions of VMware Player and Workstation for Windows. The following behavior is identical across both products. Problem: we open a CMD prompt in our guest OS (XP, Vista, Windows 7) and copy files from our host OS using the standard CMD shell copy command:
        copy z:\C$\testfiles
    The copy completes successfully, but from that point forward, all the files that were copied to our guest OS are now LOCKED on our host OS. This does not happen if we use Windows Explorer to copy files - it only happens when files are copied via the CMD shell. As mentioned at the start of this question, this behavior is reproducible in both VMware Player and VMware Workstation across multiple machines and multiple guest OSes. I've googled for a workaround, but without success. Any ideas appreciated. Malcolm

    Read the article

  • Is it me, or is it you? Does the sync work?

    - by bisi
    I have been on this for several hours now, trying to get a simple second folder to sync with my (paid) account. I cannot tell you how many times I removed all devices, removed stored passwords, killed all processes of u1, logged out and back in online...and still, the tick in the file browser (Synchronize this folder) is loading and loading and loading. Also, I have logged out, rebooted countless times. And this is after me somehow managing to get the u1 preferences to finally "connect" again. I have also checked the status of your services, and none are close to what I am experiencing. And I have checked the suggested related questions above! So please, just confirm whether it is a problem on my side, or a problem on your side.

    Read the article

  • SVN Feature Branch Method

    - by Seth
    I am getting an SVN server set up and will be using the feature branch method. I plan on having one or more branches making up a release tag. How do I merge(?) multiple branches into the release tag while still maintaining diffs and such? I've given an example of our workflow below.
        1. Multiple devs pull to local
        2. Create feature branch
        3. Commit to branch
        4. Use branch to build QA
    (Here is where my question starts) I need to have all the branches for the next build put into a build tag to be used to build Production.

    Read the article

  • Design Stock Server Application in C++

    - by Avinash
    [Question asked in an ML interview] Design a server/application that handles multiple incoming streams of stock information (stock symbol and values). The server needs to store the information in some cache (you need to design the data structure). Multiple clients are connected to the server over TCP/IP sockets. Each client subscribes to a particular stock symbol, say XYZ, and possibly to more than one. Whenever a stock symbol changes, the server should broadcast the update to the subscribed clients. If a TCP/IP write fails, the server should handle unregistration of that client. What data structures would be used, and what would the threading model look like?
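    The data structure such designs usually center on is a map from symbol to its set of subscribers, next to a cache of latest values, with the broadcast step dropping any client whose write fails. A minimal sketch in Python (the question asks about C++; the class and its methods below are illustrative assumptions, not part of the original):

        import threading

        class StockHub:
            """Cache of latest values plus a symbol -> subscribers map, guarded by a lock."""

            def __init__(self):
                self.lock = threading.Lock()
                self.prices = {}        # symbol -> latest value
                self.subscribers = {}   # symbol -> set of client sockets

            def subscribe(self, symbol, client_sock):
                with self.lock:
                    self.subscribers.setdefault(symbol, set()).add(client_sock)

            def update(self, symbol, value):
                # Called by the thread that consumes the incoming stock feed.
                with self.lock:
                    self.prices[symbol] = value
                    clients = list(self.subscribers.get(symbol, ()))
                dead = []
                for sock in clients:
                    try:
                        sock.sendall("{} {}\n".format(symbol, value).encode())
                    except OSError:
                        dead.append(sock)   # write failed: mark for unregistration
                if dead:
                    with self.lock:
                        for sock in dead:
                            self.subscribers[symbol].discard(sock)

    A common threading model to pair with this: one thread (or event loop) reading the feed and calling update(), one thread per client connection handling subscribe requests, with the lock keeping the shared maps consistent.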

    Read the article

  • Laptop overheats while using Internet

    - by therealnube
    Yes, I have figured out that my laptop overheats while using the internet via an Ethernet port on a broadband connection. Strange, isn't it? Well, I need to know why. I have installed the ATI graphics drivers (Vesa Madison), and they appear to be working fine. The temperature of my i7 HP Pavilion dv6 rises from 67 degrees to a very hot 85+, and even to 106 degrees, at which point my laptop shuts down. When running torrents or Chromium I hardly open 4 tabs, so there is no chance of CPU overloading due to multiple processes. Need help, and thank you for your time.

    Read the article

  • A 'do' statement at the end of my perl script never runs

    - by Jeremy Petzold
    In my main script, I am doing some archive manipulation. Once I have completed that, I want to run a separate script to upload my archives to an FTP server. Separately, these scripts work well. I want to add the FTP script to the end of my archive script so I only need to worry about scheduling one script to run, and I want to guarantee that the first script completes its work before the FTP script is called. After looking at all the different methods to call my FTP script, I settled on 'do'; however, when my do statement is at the end of the script, it never runs. When I place it in my main foreach loop, it runs fine, but it runs multiple times, which I want to avoid since the FTP script can handle having multiple archives to upload. Is there something I am missing? Why does it not run? Thanks

    Read the article

  • reliably restarting services using upstart or runit

    - by murtaza52
    I want to reliably restart my app and web server processes on crash. If I understand correctly, runit starts every service as a child process. If the child process crashes, this sends a signal to the parent process, which in turn respawns the service as a child. How does this work in the case of upstart? Does it also spawn a child process like runit does? I am considering using runit for this. Is that needed, or is upstart good enough for this? I am using nginx for my web server and gunicorn (Python) for my app server.

    Read the article

  • Could someone give me their two cents on this optimization strategy

    - by jimstandard
    Background: I am writing a matching script in Python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented in multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith", load all of those records into memory, and then go through each one looking for matches for a specific "John Smith" using various data points? Would this be faster, is it feasible in Python, and if so, does anyone have any recommendations for how to do it?
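    A rough sketch of the in-memory approach being described: one query pulls every record for the last name, then each transaction is scored against those candidates in Python. The table, columns, and scoring rule below are assumptions for illustration, not details from the original.

        import sqlite3   # stand-in for whichever database driver is actually in use

        def load_candidates(conn, last_name):
            """One query per last name; the rows stay in memory for repeated matching."""
            cur = conn.execute(
                "SELECT id, first_name, last_name, city FROM customers WHERE last_name = ?",
                (last_name,),
            )
            cols = ("id", "first_name", "last_name", "city")
            return [dict(zip(cols, row)) for row in cur]

        def best_match(transaction, candidates):
            """Score every in-memory candidate against one transaction and keep the best."""
            def score(cand):
                s = 0
                if cand["first_name"].lower() == transaction["first_name"].lower():
                    s += 2
                if cand.get("city") == transaction.get("city"):
                    s += 1
                return s
            return max(candidates, key=score, default=None)

        # Usage: load "Smith" once, then match many "John Smith" transactions against it.
        # conn = sqlite3.connect("customers.db")
        # smiths = load_candidates(conn, "Smith")
        # match = best_match({"first_name": "John", "city": "Austin"}, smiths)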

    Read the article

  • xubuntu 12.04 restarts after suspend - only from my account

    - by Yoav Aner
    After installing a clean Xubuntu 12.04 I noticed that when I suspend, the computer suspends and turns itself off (you see the lights go off, and hear a click sound from the HD or fans), but then about 2 seconds later it turns itself back on again... The odd thing is that:
        - It doesn't happen when booting from the live CD
        - I created another user account; when I log onto that account I can suspend fine, and the computer stays off until I press the ON button
        - When I remove my .config folder so it is clean, I can also suspend without problem on my account
    So it seems that something in my user config is causing this, but I can't work out what it might be. I tried diffing the two .config folders, and also comparing all processes running in one account with the other (ps -ef | grep <username>), but couldn't find anything obvious that might be causing this...

    Read the article

  • <thead> and <tfoot> in Safari

    - by AJ
    I am trying to print a page with multiple tables. The problem is that any of these tables may break and carry over to the next page. I have been trying to get the table header to repeat on the second page. I am currently using thead and setting the display property to table-header-group. This works just as expected in IE and Firefox, but the header will not repeat in Safari. Since we are using software that converts our page to a PDF document for printing, and the 3rd-party software uses a Safari engine, we are stuck with this problem. Does anyone know a way / workaround to make headers repeat if the table spans multiple pages in Safari?

    Read the article

  • nodejs async.waterfall method

    - by user1513388
    Update 2: Complete code listing

        var request = require('request');
        var cache = require('memory-cache');
        var async = require('async');

        var server = '172.16.221.190'
        var user = 'admin'
        var password ='Passw0rd'
        var dn ='\\VE\\Policy\\Objects'
        var jsonpayload = {"Username": user, "Password": password}

        async.waterfall([
            //Get the API Key
            function(callback){
                request.post({uri: 'http://' + server +'/sdk/authorize/', json: jsonpayload, headers: {'content_type': 'application/json'} }, function (e, r, body) {
                    callback(null, body.APIKey);
                })
            },
            //List the credential objects
            function(apikey, callback){
                var jsonpayload2 = {"ObjectDN": dn, "Recursive": true}
                request.post({uri: 'http://' + server +'/sdk/Config/enumerate?apikey=' + apikey, json: jsonpayload2, headers: {'content_type': 'application/json'} }, function (e, r, body) {
                    var dns = [];
                    for (var i = 0; i < body.Objects.length; i++) {
                        dns.push({'name': body.Objects[i].Name, 'dn': body.Objects[i].DN})
                    }
                    callback(null, dns, apikey);
                })
            },
            function(dns, apikey, callback){
                // console.log(dns)
                var cb = [];
                for (var i = 0; i < dns.length; i++) {
                    //Retrieve the credential
                    var jsonpayload3 = {"CredentialPath": dns[i].dn, "Pattern": null, "Recursive": false}
                    console.log(dns[i].dn)
                    request.post({uri: 'http://' + server +'/sdk/credentials/retrieve?apikey=' + apikey, json: jsonpayload3, headers: {'content_type': 'application/json'} }, function (e, r, body) {
                        // console.log(body)
                        cb.push({'cl': body.Classname})
                        callback(null, cb, apikey);
                        console.log(cb)
                    });
                }
            }
        ], function (err, result) {
            // console.log(result)
            // result now equals 'done'
        });

    Update: I'm building a small application that needs to make multiple HTTP calls to an external API and amalgamate the results into a single object or array, e.g.:
        1. Connect to an endpoint and get an auth key - pass the auth key to step 2.
        2. Connect to the endpoint using the auth key and get JSON results - create an object containing summary results and pass it to step 3.
        3. Iterate over the passed summary results and call the API for each item in the object to get detailed information for each summary line.
        4. Create a single JSON data structure that contains the summary and detail information.
    The original question below outlines what I've tried so far!

    Original Question: Will the async.waterfall method support multiple callbacks? i.e. iterate over an array that's passed from a previous item in the chain, then invoke multiple HTTP requests, each of which would have its own callback. e.g.:

        sync.waterfall([
            function(dns, key, callback){
                var cb = [];
                for (var i = 0; i < dns.length; i++) {
                    //Retrieve the credential
                    var jsonpayload3 = {"Cred": dns[i].DN, "Pattern": null, "Recursive": false}
                    console.log(dns[i].DN)
                    request.post({uri: 'http://' + vedserver +'/api/cred/retrieve?apikey=' + key, json: jsonpayload3, headers: {'content_type': 'application/json'} }, function (e, r, body) {
                        console.log(body)
                        cb.push({'cl': body.Classname})
                        callback(null, cb, key);
                    });
                }
            }

    Read the article

  • Should I aim for fewer HTTP requests or more cacheable CSS files?

    - by Jonathan Hanson
    We're being told that fewer HTTP requests per page load is a Good Thing. The extreme form of that for CSS would be to have a single, unique CSS file per page, with any shared site-wide styles duplicated in each file. But there's a trade-off there. If you have separate shared global CSS files, they can be cached once when the front page is loaded and then re-used on multiple pages, thereby reducing the necessary size of the page-specific CSS files. So which is better in real-world practice? Shorter CSS files through multiple discrete CSS files that are cacheable, or fewer HTTP requests through fewer-but-larger CSS files?
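    A back-of-envelope comparison makes the trade-off concrete: under the combined-file approach every page carries the shared styles again, while under the split approach the shared file is fetched once and cached, at the cost of an extra request on the first page. The sizes below are invented purely to illustrate the arithmetic.

        # Hypothetical sizes in KB, chosen only to illustrate the arithmetic
        shared_css = 50          # site-wide styles, cacheable after the first page
        page_specific_css = 10   # styles unique to each page
        pages_visited = 6

        # Strategy A: one combined CSS file per page (shared styles duplicated into each)
        combined_kb = pages_visited * (shared_css + page_specific_css)   # 360 KB, 1 CSS request per page

        # Strategy B: shared file cached after the first page, plus a small per-page file
        split_kb = shared_css + pages_visited * page_specific_css        # 110 KB, 2 CSS requests on the first page

        print("combined:", combined_kb, "KB; split:", split_kb, "KB")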

    Read the article

  • Django Redundancy

    - by Sunsu
    I've read many things about scaling Django, and the new multiple-DB support makes it so much easier. However, I have not been able to find much information on good ways to create a fully redundant system (not just one that scales). I realize there are many things that go into this problem, but the real thing I'm having trouble solving well is database redundancy. Is it possible to set up a "write slave" using Django's new multiple-DB support? If I had IP failover support, it seems like having a write slave would help solve the problem. Simple MySQL replication doesn't seem like it will work due to slave lag, right? What's the typical method of creating a redundant database system? Any input or guidance you have would be greatly appreciated. I realize I could be asking the wrong questions!
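    For the routing half of this, Django's multiple-DB support is driven by the DATABASES setting plus a database router class. A minimal sketch that sends all writes to a primary and reads to a replica might look like the following; the alias names and hosts are assumptions, replication itself is handled by MySQL, and nothing here addresses failover or replication lag on its own.

        # settings.py (sketch)
        DATABASES = {
            "default": {   # primary: receives all writes
                "ENGINE": "django.db.backends.mysql",
                "NAME": "app",
                "HOST": "db-primary.example.com",
            },
            "replica": {   # read-only copy kept current by MySQL replication
                "ENGINE": "django.db.backends.mysql",
                "NAME": "app",
                "HOST": "db-replica.example.com",
            },
        }
        DATABASE_ROUTERS = ["myapp.routers.PrimaryReplicaRouter"]

        # myapp/routers.py (sketch)
        class PrimaryReplicaRouter:
            def db_for_read(self, model, **hints):
                return "replica"        # reads may lag behind the primary

            def db_for_write(self, model, **hints):
                return "default"        # every write goes to the primary

            def allow_relation(self, obj1, obj2, **hints):
                return True             # both aliases hold the same data set

            def allow_migrate(self, db, app_label, model_name=None, **hints):
                return db == "default"  # only create tables on the primary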

    Read the article

  • /dev/sda1 at 100% - MySQL to blame?

    - by SJP
    I have an API running that receives raw binaries, processes them, and then stores metadata about the bins in a MySQL database. I have been running it for a couple of days on a VM. Today the API stopped processing the MySQL commands. After running df -h, the results were:

        root@mwdb1:/# df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda1       104G   99G     0 100% /
        udev             16G  4.0K   16G   1% /dev
        tmpfs           6.3G  364K  6.3G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none             16G     0   16G   0% /run/shm
        /dev/sdb1       5.5T   42G  5.2T   1% /data

    sda1 is at 100%.

    Read the article

  • Can anyone explain why the 1st example gets different results than the following 2?

    - by klumsy
        $b = (2,3)
        $myarray1 = @(,$b,$b)
        $myarray1[0].length #this will be 1
        $myarray1[1].length
        $myarray2 = @( ,$b
        ,$b )
        $myarray2[0].length #this will be 2
        $myarray2[1].length
        $myarray3 = @(,$b
        ,$b )
        $myarray3[0].length #this will be 2
        $myarray3[1].length

    UPDATE: I think on #powershell IRC we have worked it out. Here is another example that demonstrates the danger of breaking with the comma on the following line rather than on the top line when listing multiple items in an array over multiple lines.

        $b = (1..20)
        $a = @( $b, $b ,$b,
        $b, $b ,$b)
        for($i=0;$i -lt $a.length;$i++) { $a[$i].length }
        "--------"
        $a = @( $b, $b ,$b
        ,$b, $b ,$b)
        for($i=0;$i -lt $a.length;$i++) { $a[$i].length }

    produces

        20 20 20 20 20 20
        --------
        20 20 20 1 20 20

    I'm curious how people will explain this. I think I understand it now, but would have trouble explaining it in a concise, understandable fashion, though the above example goes somewhat toward that goal.

    Read the article

  • Designing an API with compile-time option to remove first parameter to most functions and use a glob

    - by tomlogic
    I'm trying to design a portable API in ANSI C89/ISO C90 to access a wireless networking device on a serial interface. The library will have multiple network layers, and various versions need to run on embedded devices as small as an 8-bit micro with 32K of code and 2K of data, on up to embedded devices with a megabyte or more of code and data. In most cases, the target processor will have a single network interface and I'll want to use a single global structure with all state information for that device. I don't want to pass a pointer to that structure through the network layers. In a few cases (e.g., device with more resources that needs to live on two networks) I will interface to multiple devices, each with their own global state, and will need to pass a pointer to that state (or an index to a state array) through the layers. I came up with two possible solutions, but neither one is particularly pretty. Keep in mind that the full driver will potentially be 20,000 lines or more, cover multiple files, and contain hundreds of functions. The first solution requires a macro that discards the first parameter for every function that needs to access the global state:

        // network.h
        typedef struct dev_t {
            int var;
            long othervar;
            char name[20];
        } dev_t;

        #ifdef IF_MULTI
        #define foo_function( x, a, b, c) _foo_function( x, a, b, c)
        #define bar_function( x) _bar_function( x)
        #else
        extern dev_t DEV;
        #define IFACE (&DEV)
        #define foo_function( x, a, b, c) _foo_function( a, b, c)
        #define bar_function( x) _bar_function( )
        #endif

        int bar_function( dev_t *IFACE);
        int foo_function( dev_t *IFACE, int a, long b, char *c);

        // network.c
        #ifndef IF_MULTI
        dev_t DEV;
        #endif

        int bar_function( dev_t *IFACE)
        {
            memset( IFACE, 0, sizeof *IFACE);
            return 0;
        }

        int foo_function( dev_t *IFACE, int a, long b, char *c)
        {
            bar_function( IFACE);
            IFACE->var = a;
            IFACE->othervar = b;
            strcpy( IFACE->name, c);
            return 0;
        }

    The second solution defines macros to use in the function declarations:

        // network.h
        typedef struct dev_t {
            int var;
            long othervar;
            char name[20];
        } dev_t;

        #ifdef IF_MULTI
        #define DEV_PARAM_ONLY dev_t *IFACE
        #define DEV_PARAM DEV_PARAM_ONLY,
        #else
        extern dev_t DEV;
        #define IFACE (&DEV)
        #define DEV_PARAM_ONLY void
        #define DEV_PARAM
        #endif

        int bar_function( DEV_PARAM_ONLY);
        // I don't like the missing comma between DEV_PARAM and arg2...
        int foo_function( DEV_PARAM int a, long b, char *c);

        // network.c
        #ifndef IF_MULTI
        dev_t DEV;
        #endif

        int bar_function( DEV_PARAM_ONLY)
        {
            memset( IFACE, 0, sizeof *IFACE);
            return 0;
        }

        int foo_function( DEV_PARAM int a, long b, char *c)
        {
            bar_function( IFACE);
            IFACE->var = a;
            IFACE->othervar = b;
            strcpy( IFACE->name, c);
            return 0;
        }

    The C code to access either method remains the same:

        // multi.c - example of multiple interfaces
        #define IF_MULTI
        #include "network.h"

        dev_t if0, if1;

        int main()
        {
            foo_function( &if0, -1, 3.1415926, "public");
            foo_function( &if1, 42, 3.1415926, "private");
            return 0;
        }

        // single.c - example of a single interface
        #include "network.h"

        int main()
        {
            foo_function( 11, 1.0, "network");
            return 0;
        }

    Is there a cleaner method that I haven't figured out? I lean toward the second since it should be easier to maintain, and it's clearer that there's some macro magic in the parameters to the function. Also, the first method requires prefixing the function names with "_" when I want to use them as function pointers.
    I really do want to remove the parameter in the "single interface" case to eliminate unnecessary code to push the parameter onto the stack, and to allow the function to access the first "real" parameter in a register instead of loading it from the stack. And, if at all possible, I don't want to have to maintain two separate codebases. Thoughts? Ideas? Examples of something similar in existing code? (Note that using C++ isn't an option, since some of the planned targets don't have a C++ compiler available.)

    Read the article

  • Android: use of a string array in another method

    - by spagi
    Hi all. I'm trying to make an activity that shows a multiple-choice dialog after you push a button, where you select from a list of things. These things are received from a web method before the dialog appears, so I create a string array inside onCreate after I receive them, initialising it there with the correct size. But my dialog method then can't get the array, probably because it's out of scope. My code looks like this:
        @Override
        protected Dialog onCreateDialog(int id)
        // Here is where the array is loaded into the multiple-select dialog, etc.
        @Override
        public void onCreate(Bundle savedInstanceState)
        // Here is where I initialise the array and get its contents, etc.
    I can't initialise my array when the class starts because I don't know its size yet. This has something to do with the scope of my variables and I am pretty confused.

    Read the article

  • MySQL Locks: order of unblocked threads

    - by teehoo
    I have a MySQL ISAM table being accessed by multiple PHP instances. Right now I'm using a WRITE lock to serialize access to this table. My question is: how do I ensure that the PHP instances get served on a first-come-first-served basis? Or is this the default behaviour? The official MySQL documentation doesn't mention anything about the blocked-thread order for threads of the same lock type (i.e. multiple threads attempting a WRITE LOCK). It only mentions that a WRITER will jump to the front of the waiting queue if READERS are waiting.

    Read the article
