Search Results

Search found 7884 results on 316 pages for 'ben record'.


  • Nested entities in Google App Engine: am I doing it right?

    - by Aleksandr Makov
    Trying to make the most of the GAE Datastore entities concept, but a few doubts are nagging me. Say I have the model:

      class User(ndb.Model):
          email = ndb.StringProperty(indexed=True)
          password = ndb.StringProperty(indexed=False)
          first_name = ndb.StringProperty(indexed=False)
          last_name = ndb.StringProperty(indexed=False)
          created_at = ndb.DateTimeProperty(auto_now_add=True)

          @classmethod
          def key(cls, email):
              return ndb.Key(User, email)

          @classmethod
          def Add(cls, email, password, first_name, last_name):
              user = User(parent=cls.key(email), email=email, password=password,
                          first_name=first_name, last_name=last_name)
              user.put()
              UserLogin.Record(email)

      class UserLogin(ndb.Model):
          time = ndb.DateTimeProperty(auto_now_add=True)

          @classmethod
          def Record(cls, user_email):
              login = UserLogin(parent=User.key(user_email))
              login.put()

    I need to keep track of the times of successful login operations. Each time a user logs in, the UserLogin.Record() method is executed. Now the question: am I doing this right? Thanks.

    EDIT 2: OK, I switched to keyword arguments, but then this was raised:

      Expected Key instance, got User(key=Key('User', 5418393301680128), created_at=datetime.datetime(2013, 6, 27, 10, 12, 25, 479928), email=u'[email protected]', first_name=u'First', last_name=u'Last', password=u'password').

    The message is clear enough, but I don't get why the docs are misleading. They implicitly propose to use:

      # Set Employee as Address entity's parent directly...
      address = Address(parent=employee)

    But the Model constructor expects a Key. What's worse, parent=user.key() complains that key() isn't callable, and I found out that user.key (without parentheses) works.

    EDIT 1: After reading the example from the docs and trying to replicate it, I got a type error: TypeError('Model constructor takes no positional arguments.'). This is the exact code used:

      user = User('[email protected]', 'password', 'First', 'Last')
      user.put()
      stamp = UserLogin(parent=user)
      stamp.put()

    I understand that Model was given the wrong arguments, but why is that in the docs?
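
    For what it's worth, a minimal sketch of the working pattern, hedged: it relies on ndb's documented behavior that an entity's key is the .key property (not a method) and that parent= wants a Key, not an entity. The custom key() classmethod is renamed build_key here, because a classmethod named key shadows that built-in property; build_key and the sample values are this sketch's inventions.

      from google.appengine.ext import ndb

      class User(ndb.Model):
          email = ndb.StringProperty(indexed=True)

          @classmethod
          def build_key(cls, email):
              # Renamed from key(): a classmethod called `key` would shadow
              # ndb's built-in Model.key property. (Name is this sketch's.)
              return ndb.Key(cls, email)

      class UserLogin(ndb.Model):
          time = ndb.DateTimeProperty(auto_now_add=True)

      # The Model constructor takes keyword arguments only.
      user = User(key=User.build_key('[email protected]'),
                  email='[email protected]')
      user.put()

      # parent= expects a Key, not an entity: pass user.key, no parentheses.
      stamp = UserLogin(parent=user.key)
      stamp.put()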

    Read the article

  • Xorg becomes unkillable at 3AM

    - by chew socks
    Most nights, some time around 3 AM, my Xorg process climbs to 100% CPU and GPU load also hits 100%. The process also becomes unkillable: I cannot sudo kill -9 it or get control back with sudo service lightdm restart, and I cannot switch to a tty screen with ctrl + alt + f1. To reboot I have to log in over ssh, but this is not ideal because if I reboot while it is doing this, my ZFS pool fails to mount when the machine comes back up (that is where my /home is). Does anyone have any ideas why I can't stop and restart Xorg, or even better, why this is happening? Thanks. NOTE: For anyone who comes looking with the same problem: I disabled Catalyst AI and made it through the night. I've been up for 1 day 3 hours now. My record for this month is 2 days and 19 hours without a problem; my all-time record is 6 days without a crash. I'll post here if it crashes again or I set a new record.

    Read the article

  • Install Base Transaction Error Troubleshooting

    - by LuciaC
    Oracle Installed Base is an item instance life cycle tracking application that facilitates enterprise-wide life cycle item management and tracking capability. In a typical process flow, a sales order is created and shipped; this updates Inventory and creates a new item instance in Install Base (IB). The Inventory update results in a record being placed in the SFM Event Queue. If the record is successfully processed, the IB tables are updated; if there is an error, the record is placed in the csi_txn_errors table, and the error needs to be resolved so that the IB instance can be created.

    It's extremely important to be proactive and monitor IB transaction errors regularly. Errors cascade and can build up exponentially if not resolved. Due to this cascade effect, error records need to be considered as a whole and not individually: the root cause of any error needs to be resolved first, and this may result in the subsequent errors resolving themselves.

    Install Base Transaction Error Diagnostic Program: in the past the IBtxnerr.sql script was used to diagnose transaction errors; it has now been replaced by an enhanced concurrent-program version of the script. See the following note for details of how to download, install and run the concurrent program, as well as how to interpret the results: Doc ID 1501025.1 - Install Base Transaction Error Diagnostic Program. The program provides comprehensive information about the errors found, as well as links to known knowledge articles which can help to resolve the specific error.

    Troubleshooting: watch the replay of the 'EBS CRM: 11i and R12 Transaction Error Troubleshooting - an Overview' webcast or download the presentation PDF (go to Doc ID 1455786.1 and click on the 'Archived 2011' tab). The webcast and PDF include more information, including SQL statements that you can use to identify errors and their sources, as well as recommended setup and troubleshooting tips. Refer to these notes for comprehensive information:

    - Doc ID 1275326.1: E-Business Oracle Install Base Product Information Center
    - Doc ID 1289858.1: Install Base Transaction Errors Master Repository
    - Doc ID 577978.1: Troubleshooting Install Base Errors in the Transaction Errors Processing Form

    Don't forget your Install Base Community, where you can ask questions to help you resolve your IB transaction errors.
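
    As a small illustration of that proactive monitoring, the backlog in csi_txn_errors can be polled with queries along these lines. This is a hedged sketch only: the error_text and processed_flag column names are assumptions for illustration, not verified against the actual csi_txn_errors schema (use the Doc ID 1501025.1 diagnostic for the supported approach):

      -- Hedged sketch: column names error_text / processed_flag are assumed.
      -- How big is the unresolved IB error backlog?
      SELECT COUNT(*)
      FROM   csi_txn_errors
      WHERE  processed_flag <> 'Y';

      -- Which error message occurs most often? Fix that root cause first;
      -- cascaded errors may then resolve themselves.
      SELECT   error_text, COUNT(*) AS occurrences
      FROM     csi_txn_errors
      GROUP BY error_text
      ORDER BY occurrences DESC;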

    Read the article

  • Google Analytics custom variables: how are they recorded?

    - by mrtsherman
    I have been asked to add GA custom variable tracking to my company's website. The company website uses server-side includes, so modifications to the tracking code happen identically everywhere; maintenance is therefore a headache. Also, GA takes about twenty-four hours for custom variables to start showing up in reports, and that makes troubleshooting a headache too. So say you have custom variables:

      // visitor-level tracking (scope 1), slot 1: id = 12345
      _gaq.push(['_setCustomVar', 1, 'id', '12345', 1]);
      // page-level tracking (scope 3), slot 2: email = [email protected]
      // (slots must differ, or the second call overwrites the first)
      _gaq.push(['_setCustomVar', 2, 'email', '[email protected]', 3]);

    The marketing people want the following out of this:

    1. A user visits the site and we record a unique id for them. Whenever they return, this id is used in GA.
    2. A user signs up for our newsletter on page X and we record their email address. Whenever they return, this email address is used in GA.

    Now, a big problem for me is that I don't use GA and the marketing people don't use custom variables, so we don't actually know how this will work. Do I want page, session or visitor level tracking? What happens given that the same GA code is used on every page? If they visit the email sign-up form and we record the email address, but then they go somewhere else where email is nonexistent, will the value get 'overwritten'? Sorry for the long question, but there are a lot of unknowns for a GA noob.

    Read the article

  • How should I implement a transaction database in EJB 3.0?

    - by JamesBoyZ
    In the CustomerTransactions entity, I have the following field to record what the customer bought:

      @ManyToMany
      private List<Item> listOfItemsBought;

    Thinking more about this field, there's a chance it may not work, because merchants are allowed to change an item's information (e.g. price, discount, etc.). Hence, this field will not record what the customer actually bought at the time the transaction occurred. At the moment, I can only think of two ways to make it work:

    1. Record the transaction details in a String field. I feel this would be messy if I need to extract some information about the transaction later on.
    2. Whenever the merchant changes an item's information, do not update that item's fields directly. Instead, create a new item with all the new information and keep the old item untouched. I feel this way is better because I can easily extract information about the transaction later on; however, the bad side is that my Item table may end up containing a lot of rows.

    I'd be very grateful if someone could give me advice on how to tackle this problem. UPDATE: here is more information about the current design:

      public class Customer implements Serializable {
          @OneToMany
          private List<CustomerTransactions> listOfTransactions;
      }

      public class CustomerTransactions implements Serializable {
          @ManyToMany
          private List<Item> listOfItemsBought;
      }

      public class Merchant implements Serializable {
          @OneToMany
          private List<Item> listOfSellingItems;
      }
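
    For what it's worth, a common shape for option 2 is to make each Item row an immutable "version" and have transactions reference the version that was current at purchase time. A minimal hedged sketch; the ItemVersion name and its fields are this sketch's inventions, not part of the question's model:

      // Hedged sketch of option 2: immutable item versions.
      // ItemVersion and its fields are illustrative, not from the question.
      import java.io.Serializable;
      import java.math.BigDecimal;
      import java.util.List;
      import javax.persistence.*;

      @Entity
      public class ItemVersion implements Serializable {
          @Id @GeneratedValue
          private Long id;

          private String name;
          private BigDecimal price;   // frozen when this version is created

          // No setters after creation: a merchant edit creates a NEW
          // ItemVersion row and repoints the merchant's current reference.
      }

      @Entity
      public class CustomerTransactions implements Serializable {
          @Id @GeneratedValue
          private Long id;

          // The transaction references the exact versions bought, so later
          // merchant edits can never rewrite purchase history.
          @ManyToMany
          private List<ItemVersion> listOfItemsBought;
      }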

    Read the article

  • Saving child collections with NHibernate

    - by Ben
    Hi, I am in the process of learning NHibernate, so bear with me. I have an Order class and a Transaction class; Order has a one-to-many association with Transaction. The Transactions table in my database has a not-null constraint on the OrderId foreign key.

    Order class:

      public class Order
      {
          public virtual Guid Id { get; set; }
          public virtual DateTime CreatedOn { get; set; }
          public virtual decimal Total { get; set; }
          public virtual ICollection<Transaction> Transactions { get; set; }

          public Order()
          {
              Transactions = new HashSet<Transaction>();
          }
      }

    Order mapping:

      <class name="Order" table="Orders">
        <cache usage="read-write"/>
        <id name="Id">
          <generator class="guid"/>
        </id>
        <property name="CreatedOn" type="datetime"/>
        <property name="Total" type="decimal"/>
        <set name="Transactions" table="Transactions" lazy="false" inverse="true">
          <key column="OrderId"/>
          <one-to-many class="Transaction"/>
        </set>
      </class>

    Transaction class:

      public class Transaction
      {
          public virtual Guid Id { get; set; }
          public virtual DateTime ExecutedOn { get; set; }
          public virtual bool Success { get; set; }
          public virtual Order Order { get; set; }
      }

    Transaction mapping:

      <class name="Transaction" table="Transactions">
        <cache usage="read-write"/>
        <id name="Id" column="Id" type="Guid">
          <generator class="guid"/>
        </id>
        <property name="ExecutedOn" type="datetime"/>
        <property name="Success" type="bool"/>
        <many-to-one name="Order" class="Order" column="OrderId" not-null="true"/>
      </class>

    Really, I don't want a bidirectional association. There is no need for my Transaction objects to reference their Order directly (I just need to access the transactions of an order). However, I had to add this so that Order.Transactions is persisted to the database. Repository:

      public void Update(Order entity)
      {
          using (ISession session = NHibernateHelper.OpenSession())
          using (ITransaction transaction = session.BeginTransaction())
          {
              session.Update(entity);
              foreach (var tx in entity.Transactions)
              {
                  tx.Order = entity;
                  session.SaveOrUpdate(tx);
              }
              transaction.Commit();
          }
      }

    My problem is that this issues an UPDATE for every transaction in the order's collection, regardless of whether it has changed or not. What I was trying to avoid was having to explicitly save each transaction before saving the order; instead I just want to add the transactions to the order and then save the order:

      public void Can_add_transaction_to_existing_order()
      {
          var orderRepo = new OrderRepository();
          var order = orderRepo.GetById(new Guid("aa3b5d04-c5c8-4ad9-9b3e-9ce73e488a9f"));

          Transaction tx = new Transaction();
          tx.ExecutedOn = DateTime.Now;
          tx.Success = true;
          order.Transactions.Add(tx);

          orderRepo.Update(order);
      }

    Although I have found quite a few articles covering the setup of a one-to-many association, most of them discuss retrieving data rather than persisting it back. Many thanks, Ben
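
    The usual way to get exactly that workflow is to let NHibernate cascade the save from Order to its Transactions, so the repository never touches them individually. A hedged sketch of the mapping change; cascade="save-update" (or "all-delete-orphan", if orphaned transactions should be deleted) is the standard attribute, but which fits depends on the domain:

      <!-- Hedged sketch: cascade the save from Order to new Transactions. -->
      <set name="Transactions" table="Transactions" lazy="false"
           inverse="true" cascade="save-update">
        <key column="OrderId"/>
        <one-to-many class="Transaction"/>
      </set>

    With inverse="true" the child still owns the foreign key, so tx.Order must be set when a transaction is added (typically in a small Order.AddTransaction(tx) helper); the explicit SaveOrUpdate loop in the repository can then go, and NHibernate's dirty checking decides what actually needs writing.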

    Read the article

  • Why are changes to coffeescript files not being compiled when my Rails 3.2.0 app is in development mode?

    - by ben
    Normally, any changes I make to .js.coffee files in my Rails 3.2.0 app in development mode take effect when I refresh the page. All of a sudden, this is not happening. If I do rake assets:precompile, the changes are shown, but if I then do rake assets:clean they go back to not being shown. What is causing this? Edit: restarting the server makes the changes show. Why isn't this happening automatically, as before? Edit: here is my development.rb:

      Myapp::Application.configure do
        # Settings specified here will take precedence over those in config/application.rb

        # In the development environment your application's code is reloaded on
        # every request. This slows down response time but is perfect for development
        # since you don't have to restart the web server when you make code changes.
        config.cache_classes = false

        # Log error messages when you accidentally call methods on nil.
        config.whiny_nils = true

        # Show full error reports and disable caching
        config.consider_all_requests_local = true
        config.action_controller.perform_caching = false

        # Don't care if the mailer can't send
        config.action_mailer.raise_delivery_errors = false

        # Print deprecation notices to the Rails logger
        config.active_support.deprecation = :log

        # Only use best-standards-support built into browsers
        config.action_dispatch.best_standards_support = :builtin

        # Raise exception on mass assignment protection for Active Record models
        config.active_record.mass_assignment_sanitizer = :strict

        # Log the query plan for queries taking more than this (works
        # with SQLite, MySQL, and PostgreSQL)
        config.active_record.auto_explain_threshold_in_seconds = 0.5

        # Do not compress assets
        config.assets.compress = false

        # Expands the lines which load the assets
        config.assets.debug = true

        config.action_mailer.default_url_options = { :host => 'localhost:3000' }
        config.log_level = :warn
      end
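
    One plausible culprit (an assumption, not something the question confirms) is a stale Sprockets cache and/or leftover precompiled files shadowing live compilation. A hedged sketch of the cleanup to try, using Rails 3.2 default paths:

      # Hedged sketch: remove precompiled assets and the Sprockets cache,
      # then restart the development server.
      rake assets:clean
      rm -rf tmp/cache/assets
      # restart `rails server` (or `touch tmp/restart.txt` under Passenger)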

    Read the article

  • I'm looking for this graph solution

    - by Ben Fransen
    Hello all, I recently found a pretty graph while browsing the admin skins at ThemeForest; one of the templates has a really nice, smooth, clean graph. I've been searching around, but so far without luck finding out which solution is used. An example can be found at: http://enstyled.com/adminus/original/page.html The source of this graph looks like:

      <table class="stats bar" cellpadding="0" cellspacing="0" width="100%">
        <thead>
          <tr>
            <td>&nbsp;</td>
            <th scope="col">02.09</th> <th scope="col">03.09</th> <th scope="col">04.09</th>
            <th scope="col">05.09</th> <th scope="col">06.09</th> <th scope="col">07.09</th>
            <th scope="col">08.09</th> <th scope="col">09.09</th> <th scope="col">10.09</th>
            <th scope="col">11.09</th> <th scope="col">12.09</th> <th scope="col">01.10</th>
            <th scope="col">02.10</th> <th scope="col">03.10</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <th scope="row">Page views</th>
            <td>1800</td> <td>900</td> <td>700</td> <td>1200</td> <td>600</td> <td>2800</td> <td>3200</td>
            <td>500</td> <td>2200</td> <td>1000</td> <td>1200</td> <td>700</td> <td>650</td> <td>800</td>
          </tr>
          <tr>
            <th scope="row">Unique visitors</th>
            <td>1600</td> <td>650</td> <td>550</td> <td>900</td> <td>500</td> <td>2300</td> <td>2700</td>
            <td>350</td> <td>1700</td> <td>600</td> <td>1000</td> <td>500</td> <td>400</td> <td>700</td>
          </tr>
        </tbody>
      </table>

    Can someone please tell me which graphing solution is used here? Thanks in advance, Ben Fransen

    Read the article

  • SQL Compact performance on device

    - by Ben M
    My SQL Compact database is very simple, with just three tables and a single index on one of the tables (the table with 200k rows; the other two have fewer than a hundred each). The first time the .sdf file is used by my Compact Framework application on the target Windows Mobile device, the system hangs for well over a minute while "something" is done to the database: as deployed, the DB is 17 megabytes, and after this first use it balloons to 24 megs. All subsequent usage is pretty fast, so I'm assuming there's some sort of initialization / index building going on during this first use. I'd rather not subject the user to this delay, so I'm wondering what this initialization process is and whether it can be performed before deployment. For now, I've copied the "initialized" database back to my desktop for use in the setup project, but I'd really like a better answer / solution. I've tried "full compact / repair" in the VS Database Properties dialog, but this made no difference. Any ideas? For the record, I should add that the database is only read from by the device application; no modifications are made by that code.

    Read the article

  • Android - Getting audio to play through earpiece

    - by Donal Rafferty
    I currently have code that reads a recording in from the device's mic using the AudioRecord class and then plays it back out using the AudioTrack class. My problem is that playback comes out of the speakerphone; I want it to play out via the earpiece on the device. Here is my code:

      public class LoopProg extends Activity {

          boolean isRecording; // currently not used
          AudioManager am;
          int count = 0;

          /** Called when the activity is first created. */
          @Override
          public void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              setContentView(R.layout.main);
              am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
              am.setMicrophoneMute(true);
              while (count <= 1000000) {
                  Record record = new Record();
                  record.run();
                  count++;
                  Log.d("COUNT", "Count is : " + count);
              }
          }

          public class Record extends Thread {

              static final int bufferSize = 200000;
              final short[] buffer = new short[bufferSize];
              short[] readBuffer = new short[bufferSize];

              public void run() {
                  isRecording = true;
                  android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

                  int buffersize = AudioRecord.getMinBufferSize(11025,
                          AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
                  AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025,
                          AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
                          buffersize);
                  AudioTrack atrack = new AudioTrack(AudioManager.STREAM_MUSIC, 11025,
                          AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
                          buffersize, AudioTrack.MODE_STREAM);

                  am.setRouting(AudioManager.MODE_NORMAL, 1, AudioManager.STREAM_MUSIC);
                  int ok = am.getRouting(AudioManager.ROUTE_EARPIECE);
                  Log.d("ROUTING", "getRouting = " + ok);
                  setVolumeControlStream(AudioManager.STREAM_VOICE_CALL);
                  // am.setSpeakerphoneOn(true);
                  Log.d("SPEAKERPHONE", "Is speakerphone on? : " + am.isSpeakerphoneOn());
                  am.setSpeakerphoneOn(false);
                  Log.d("SPEAKERPHONE", "Is speakerphone on? : " + am.isSpeakerphoneOn());

                  atrack.setPlaybackRate(11025);
                  byte[] buffer = new byte[buffersize];
                  arec.startRecording();
                  atrack.play();

                  while (isRecording) {
                      arec.read(buffer, 0, buffersize);
                      atrack.write(buffer, 0, buffer.length);
                  }
                  arec.stop();
                  atrack.stop();
                  isRecording = false;
              }
          }
      }

    As you can see from the code, I have tried using the AudioManager class and its methods, including the deprecated setRouting method, and nothing works; the setSpeakerphoneOn method seems to have no effect at all, and neither does the routing method. Has anyone got any ideas on how to get it to play via the earpiece instead of the speakerphone?
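
    For what it's worth, earpiece routing on most Android builds follows the audio stream and mode rather than setSpeakerphoneOn() alone: playing on STREAM_VOICE_CALL while in MODE_IN_CALL typically comes out of the earpiece. A hedged sketch of that change (behavior varies by device and OS version, and the MODIFY_AUDIO_SETTINGS permission is assumed):

      // Hedged sketch: route playback to the earpiece by using the voice-call
      // stream in in-call mode instead of STREAM_MUSIC. Device-dependent.
      AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
      am.setMode(AudioManager.MODE_IN_CALL);   // voice-call routing rules apply
      am.setSpeakerphoneOn(false);             // earpiece rather than speaker

      AudioTrack atrack = new AudioTrack(
              AudioManager.STREAM_VOICE_CALL,  // key change: voice-call stream
              11025,
              AudioFormat.CHANNEL_CONFIGURATION_MONO,
              AudioFormat.ENCODING_PCM_16BIT,
              buffersize,
              AudioTrack.MODE_STREAM);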

    Read the article

  • Jini: single server with multiple clients

    - by user200340
    Hi all, I have a question about how to let multiple clients access a single file located on the server side while keeping the file consistent. I have a simple phone-book server-client Jini program running at the moment. The server only provides some getter functions to clients, such as getName(String number) and getNumber(String name), from a PhoneBook class (serializable); the phone-book data is stored in a text file (phonebook.txt) at the moment. I have implemented some functions for writing a new record into the phonebook.txt file. If the name being written already exists, an integer is appended to the written record. For example, if the existing phonebook.txt contains

      ....
      John 01-01010101
      ....

    and the record being written is "John 01-12345678", then "John_1 01-12345678" is written into phonebook.txt. However, if I start two clients A and B (on the same machine, using localhost), and A tries to write "John 01-11111111" while B tries to write "John 01-22222222", the earlier record is overwritten by the later one. So there must be something I did completely wrong. My client and server code is just like the Jini HelloWorld example. The server-side code:

    1. LookupDiscovery with parameter new String[]{""};
    2. a DiscoveryListener for the LookupDiscovery;
    3. registrations are saved in a Hashtable;
    4. for every discovered lookup service, I use the registrar to register the ServiceItem; the ServiceItem contains null attributeSets, a null serviceId, and a service.

    The client code:

    1. LookupDiscovery with parameter new String[]{""};
    2. a DiscoveryListener for the LookupDiscovery;
    3. a ServiceTemplate with null attributeSets, a null serviceId and a type, the type being the interface class;
    4. for each found ServiceRegistrar, if it can find the ServiceTemplate being looked for, the returned Object is cast to the type of the interface class.

    I have tried to google for more details, and I found that JavaSpaces could be the piece I'm missing. But I am still not sure about it (I have only been using Jini for a very short time). Any help would be greatly appreciated.
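
    Whatever the discovery setup looks like, the overwrite described above is a plain read-modify-write race on phonebook.txt: two clients read the same original file and each writes back its own copy. The minimal fix is to funnel all mutations through one synchronized method on the server. A hedged sketch; the PhoneBookStore class, method names and file layout are assumptions based on the description:

      // Hedged sketch: serialize all writes so concurrent clients cannot
      // clobber each other's updates. Names are illustrative.
      import java.io.FileWriter;
      import java.io.IOException;
      import java.io.PrintWriter;

      public class PhoneBookStore {
          private final Object lock = new Object();

          public void addRecord(String name, String number) throws IOException {
              synchronized (lock) {                              // one writer at a time
                  PrintWriter out = new PrintWriter(
                          new FileWriter("phonebook.txt", true)); // append, don't rewrite
                  try {
                      out.println(name + " " + number);
                  } finally {
                      out.close();
                  }
              }
          }
      }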

    Read the article

  • FASM vs MASM translation problem with mov si, offset msg

    - by Ruben Trancoso
    Hi folks, I just did my first test with MASM and FASM using (almost) the same code, and I ran into trouble. The only difference is that, to produce just the 104 bytes I need to write to the MBR, in FASM I put org 7c00h and in MASM org 0h. The problem is the mov si, offset msg, which in the first case translates to 44 7C (7c44h) and with MASM translates to 44 00 (0044h), but only when I change org 7c00h to org 0h in MASM; otherwise MASM produces the entire segment from 0 to 7dff. How do I solve this? In short: how do I make MASM produce a binary whose first byte is at 7c00h, with subsequent jumps remaining relative to 7c00h?

      .model TINY
      .code
      org 7c00h                  ; Boot entry point. Address 07c0:0000 in memory

          xor ax, ax             ; Zero out ax
          mov ds, ax             ; Set data segment to base of RAM
          jmp start              ; Jump to the first byte after DOS boot record data

      ; ----------------------------------------------------------------------
      ; DOS boot record data
      ; ----------------------------------------------------------------------
      brINT13Flag     db 90h            ; 0002h - 0EH for INT13 AH=42 READ
      brOEM           db 'MSDOS5.0'     ; 0003h - OEM name & DOS version (8 chars)
      brBPS           dw 512            ; 000Bh - Bytes/sector
      brSPC           db 1              ; 000Dh - Sectors/cluster
      brResCount      dw 1              ; 000Eh - Reserved (boot) sectors
      brFATs          db 2              ; 0010h - FAT copies
      brRootEntries   dw 0E0h           ; 0011h - Root directory entries
      brSectorCount   dw 2880           ; 0013h - Sectors in volume, < 32MB
      brMedia         db 240            ; 0015h - Media descriptor
      brSPF           dw 9              ; 0016h - Sectors per FAT
      brSPH           dw 18             ; 0018h - Sectors per track
      brHPC           dw 2              ; 001Ah - Number of Heads
      brHidden        dd 0              ; 001Ch - Hidden sectors
      brSectors       dd 0              ; 0020h - Total number of sectors
                      db 0              ; 0024h - Physical drive no.
                      db 0              ; 0025h - Reserved (FAT32)
                      db 29h            ; 0026h - Extended boot record sig
      brSerialNum     dd 404418EAh      ; 0027h - Volume serial number (random)
      brLabel         db 'OSAdventure'  ; 002Bh - Volume label (11 chars)
      brFSID          db 'FAT12   '     ; 0036h - File System ID (8 chars)

      ; ----------------------------------------------------------------------
      ; Boot code
      ; ----------------------------------------------------------------------
      start:
          mov si, offset msg
          call showmsg
      hang:
          jmp hang

      msg db 'Loading...',0

      showmsg:
          lodsb
          cmp al, 0
          jz showmsgd
          push si
          mov bx, 0007
          mov ah, 0eh
          int 10h
          pop si
          jmp showmsg
      showmsgd:
          retn

      ; ----------------------------------------------------------------------
      ; Boot record signature
      ; ----------------------------------------------------------------------
          dw 0AA55h                     ; Boot record signature
      END
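
    The usual answer, hedged since assembler and linker versions differ: MASM pads from the segment origin, so org 7c00h emits everything from offset 0 up to 7c00h as part of the image. To get a 512-byte image whose data offsets still line up with code loaded at physical 7C00h, keep org 0 and base the segment registers at 07C0h instead of 0 (07C0:0000 is physical 7C00h). A sketch of the entry code under that assumption:

      ; Hedged sketch: ORG 0 in MASM, segments based at 07C0h so that
      ; `offset msg` assembled from 0 resolves correctly at runtime.
          org 0
          mov ax, 07c0h         ; 07C0:0000 == physical 7C00h
          mov ds, ax            ; data references now line up
          jmp start             ; near jumps are relative and unaffected

      ; If the BIOS entered at 0000:7C00h and CS must match too, normalize
      ; it with a far jump (hand-assembled here):
      ;     db 0EAh             ; far JMP opcode
      ;     dw offset start     ; new IP
      ;     dw 07c0h            ; new CS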

    Read the article

  • SQL Server 2005 - query with case statement

    - by user329266
    I'm trying to put together a single query to be used eventually in a SQL Server 2005 report. I need to:

    1. Pull in all distinct records for values in the "eventid" column for a time frame. This seems to work.
    2. For each eventid referenced above, search for all instances of the same eventid to see if there is another record with TaskName like 'review1%'. Again, this seems to work.
    3. This is where things get complicated: for each record where TaskName is like review1, I need to see if another record exists with the same eventid and TaskName = 'End'. Ultimately, I need a count of how many records have TaskName like 'review1%', and then how many have TaskName like 'review1%' AND a matching record with TaskName = 'End'. I would think this could be accomplished by setting a flag per record: 1 if a record with TaskName = 'End' exists for the eventid, 0 otherwise.

    The query below seems to accomplish item 1:

      SELECT eventid, TimeStamp, TaskName, filepath
      FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                   ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
            FROM eventrecords
            WHERE (TimeStamp >= '2010-4-1 00:00:00.000')
              AND (TimeStamp <= '2010-4-21 00:00:00.000')) AS T
      WHERE seq = 1
      ORDER BY eventid

    And the query below seems to accomplish item 2:

      SELECT eventid, TimeStamp, TaskName, filepath
      FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                   ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
            FROM eventrecords
            WHERE (TimeStamp >= '2010-4-1 00:00:00.000')
              AND (TimeStamp <= '2010-4-21 00:00:00.000')
              AND TaskName LIKE 'Review1%') AS T
      WHERE seq = 1
      ORDER BY eventid

    This brings back the eventids that also have TaskName = 'End':

      SELECT eventid, TimeStamp, TaskName, filepath
      FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                   ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
            FROM eventrecords
            WHERE (TimeStamp >= '2010-4-1 00:00:00.000')
              AND (TimeStamp <= '2010-4-21 00:00:00.000')
              AND TaskName LIKE 'Review1%') AS T
      WHERE seq = 1
        AND eventid IN (SELECT eventid FROM eventrecords WHERE TaskName = 'End')
      ORDER BY eventid

    So I've tried the following for item 3:

      SELECT eventid, TimeStamp, TaskName, filepath
      FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                   ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
            FROM eventrecords
            WHERE (TimeStamp >= '2010-4-1 00:00:00.000')
              AND (TimeStamp <= '2010-4-21 00:00:00.000')
              AND TaskName LIKE 'Review1%') AS T
      WHERE seq = 1
        AND CASE WHEN (eventid IN (SELECT eventid FROM eventrecords WHERE TaskName = 'End')
            THEN 1 ELSE 0) AS bit END
      ORDER BY eventid

    When I try to run this, I get: "Incorrect syntax near the keyword 'then'." Not sure what I'm doing wrong; I haven't seen any examples quite like this. I should mention that eventrecords has a primary key, but including it doesn't seem to help anything, and I am not permitted to change the table (ugh). I've received one suggestion to use a cursor and a temporary table, but am not sure how badly that would bog down performance when the report is running. Thanks in advance.
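
    For what it's worth, the "Incorrect syntax" comes from the misplaced parentheses and the stray "as bit" inside the CASE, and from using a bare CASE as a filter: a CASE is an expression and must either be compared to something or moved into the SELECT list. A hedged sketch of the working shape:

      -- As a filter, no CASE is needed; IN is already a predicate:
      --   WHERE seq = 1
      --     AND eventid IN (SELECT eventid FROM eventrecords WHERE TaskName = 'End')

      -- As a per-record 0/1 flag in the SELECT list instead:
      SELECT eventid, TimeStamp, TaskName, filepath,
             CASE WHEN eventid IN (SELECT eventid FROM eventrecords
                                   WHERE TaskName = 'End')
                  THEN 1 ELSE 0 END AS reached_end
      FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                   ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
            FROM eventrecords
            WHERE (TimeStamp >= '2010-4-1 00:00:00.000')
              AND (TimeStamp <= '2010-4-21 00:00:00.000')
              AND TaskName LIKE 'Review1%') AS T
      WHERE seq = 1
      ORDER BY eventid

    The two counts wanted in item 3 are then COUNT(*) and SUM(reached_end) over that result.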

    Read the article

  • Read variable-length records from a buffer - weird memory issues

    - by bsg
    Hi, I'm trying to implement an I/O-intensive quicksort (C++ qsort) on a very large dataset. In the interests of speed, I'd like to read a chunk of data at a time into a buffer and then use qsort to sort it inside the buffer. (I am currently working with text files but would like to move to binary soon.) However, my data is composed of variable-length records, and qsort needs to be told the length of the record in order to sort. Is there any way to standardize this? The only thing I could think of is rather convoluted: my program currently reads from the buffer until it hits a linefeed character ('10' in ASCII), transferring each character over to another array. When it finds a linefeed (the delimiter in the input file), it fills the number of spaces remaining in the buffer for that record (record size is set to 30) with null characters. This way, I should end up with a buffer full of fixed-size records to give qsort. I know there are several problems with my approach: it's clumsy, and the record size might conceivably be larger than 30 (though it is generally much less). Is there a better way of doing this? As well, my current code doesn't even work: when I debug it, it seems to be transferring characters from one buffer to the other, but when I try to print out the buffer, it contains only the first record. Here is my code:

      FILE *fp;
      unsigned char *buff;
      unsigned char *realbuff;
      FILE *inputFiles[NUM_INPUT_FILES];

      buff = (unsigned char *) malloc(2048);
      realbuff = (unsigned char *) malloc(NUM_RECORDS * RECORD_SIZE);

      fp = fopen("postings0.txt", "r");
      if (fp)
      {
          fread(buff, 1, 2048, fp);
          /* for(int i=0; i<30; i++)
                 cout << buff[i] << endl; */
          int y = 0;
          int recordcounter = 0;
          // cout << buff;
          for (int i = 0; i < 100; i++)
          {
              if (buff[i] != char(10))
              {
                  realbuff[y] = buff[i];
                  y++;
                  recordcounter++;
              }
              else
              {
                  if (recordcounter < RECORD_SIZE)
                      for (int j = recordcounter; j < RECORD_SIZE; j++)
                      {
                          realbuff[y] = char(0);   // pad bytes are '\0', which is
                          y++;                     // also the string terminator
                      }
                  recordcounter = 0;
              }
          }
          cout << realbuff << endl;   // << on a char pointer stops at the first
          cout << buff;               // '\0', so only the first record prints
      }
      else
          cout << "sorry";

    Thank you very much, bsg
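
    Incidentally, the "only the first record prints" mystery is the padding itself: operator<< on a char pointer stops at the first '\0'. A hedged sketch of printing, and sorting, the padded buffer record by record (function names are this sketch's own):

      // Hedged sketch: treat realbuff as num_records fixed-size slots.
      #include <cstdlib>
      #include <cstring>
      #include <iostream>

      const int RECORD_SIZE = 30;

      // qsort comparator over fixed-size, '\0'-padded records; pad bytes
      // compare lower than any character, which keeps ordering sane.
      int compare_records(const void *a, const void *b)
      {
          return std::memcmp(a, b, RECORD_SIZE);
      }

      void dump_records(const unsigned char *realbuff, int num_records)
      {
          for (int r = 0; r < num_records; r++)
              // write exactly one slot; not fooled by embedded '\0' pad bytes
              std::cout.write(reinterpret_cast<const char *>(realbuff) + r * RECORD_SIZE,
                              RECORD_SIZE) << '\n';
      }

      // usage: std::qsort(realbuff, num_records, RECORD_SIZE, compare_records);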

    Read the article

  • Issues with LINQ (to Entity) [adding records]

    - by Mario
    I am using LINQ to Entities in a project where I pull a bunch of data from the database, organize it into a bunch of objects, and save those to the database. I have not had problems writing to the db with LINQ to Entities before, but I have run into a snag with this particular one. Here's the error I get (this is the InnerException; the exception itself is useless!):

      New transaction is not allowed because there are other threads running in the session.

    I have seen that before, when I was trying to save my changes inside a loop. In this case, the loop finishes, and it tries to make the call, only to give me the exception. Here's the current code:

      try
      {
          // finalResult is a list of the keys to match on for the records being pulled
          foreach (int i in finalResult)
          {
              var queryEff = (from eff in dbMRI.MemberEligibility
                              where eff.Member_Key == i && eff.EffDate >= DateTime.Now
                              select eff.EffDate).Min();
              if (queryEff != null)
              {
                  // Add a record to the Process table
                  Process prRecord = new Process();
                  prRecord.GroupData = qa;
                  prRecord.Member_Key = i;
                  prRecord.ProcessDate = DateTime.Now;
                  prRecord.RecordType = "F";
                  prRecord.UsernameMarkedBy = "Autocard";
                  prRecord.GroupsId = qa.GroupsID;
                  prRecord.Quantity = 2;
                  prRecord.EffectiveDate = queryEff;
                  dbMRI.AddObject("Process", prRecord);
              }
          }
          dbMRI.SaveChanges(); // <-- Crashes here

          foreach (int i in finalResult)
          {
              var queryProc = from pro in dbMRI.Process
                              where pro.Member_Key == i && pro.UsernameMarkedBy == "Autocard"
                              select pro;
              foreach (var qp in queryProc)
              {
                  Audit aud = new Audit();
                  aud.Member_Key = i;
                  aud.ProcessId = qp.ProcessId;
                  aud.MarkDate = DateTime.Now;
                  aud.MarkedByUsername = "Autocard";
                  aud.GroupData = qa;
                  dbMRI.AddObject("Audit", aud);
              }
          }
          dbMRI.SaveChanges(); // <-- AND here (if the first one is commented out)
      }
      catch (Exception e)
      {
          // Do something here
      }

    Basically, I need to insert a record, get the identity of that inserted record, and insert a record into another table with the identity from the first record. Given some other constraints, it is not possible to create a FK relationship between the two (I've tried, but some other parts of the app won't allow it, AND my DBA team for whatever reason hates FKs, but that's a topic for another day :)). Any ideas what might be causing this? Thanks!
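
    A common trigger for this exact message is issuing new commands on a context while a lazily-evaluated query on the same context still has an open reader, e.g. if finalResult or qa came from a query that was never materialized. A hedged sketch of the usual remedy (an assumption about the cause, not a confirmed diagnosis): force every query to execute up front with ToList().

      // Hedged sketch: materialize queries so no data reader is still open
      // on dbMRI when AddObject / SaveChanges run.
      List<int> keys = finalResult.ToList();   // assumes finalResult is a lazy query

      foreach (int i in keys)
      {
          var queryProc = (from pro in dbMRI.Process
                           where pro.Member_Key == i && pro.UsernameMarkedBy == "Autocard"
                           select pro).ToList();   // forces execution here

          foreach (var qp in queryProc)
          {
              // ... build and add the Audit rows exactly as before ...
          }
      }
      dbMRI.SaveChanges();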

    Read the article

  • How to wait for multiple function calls to finish

    - by user351412
    I have a problem waiting for multiple function calls to finish; the code is listed below, and the main function is btnEvalClick. I have tried alternatives 1 and 2 to keep the loop from moving to the next record before the called functions finish, but neither works.

      private function btnEvalClick(event:Event):void {
          var i:int;
          for (i = 0; i < (dataArr1.length); i++) {
              dispatchEvent(new FlexEvent('test'));
              callfunc1('cydatGMX');  // call function 1
              callfun2('cydatGMO');   // call function 2
              editSave();             // save record (HTTP)

              // ## Alternative 1
              // if (String(event) == 'SAVEOK') {
              //     RecMov('next');  // move record if save = OK
              // }

              // ## Alternative 2
              // while (waitfc == '') // keep looping until waitfc is 'OK'
              // {
              //     z = z + 1;
              // }
              // RecMov('next');      // move to next record to process
          }
      }

      private function callfunc1(tasal:String):void {
          var mySO:SharedObject;
          var myDP:Array;
          var i:int;
          var prm:Array;
          try {
              mySO = SharedObject.getLocal(tasal, '/');
              prm = mySO.data.txt.split('?');
              for (i = 0; i < (prm.length - 1); i++) {
                  myDP = prm[i].toString().split('^');
                  if (myDP[0].toString() == String(dataArr1[dg].MatrixCDCol)) {
                      myDPX = myDP;
                      break;
                  }
              }
          }
          catch (err:Error) {
              Alert.show('Limit object creation fail (' + tasal + '), please retry');
          }
      }

      private function editSave():void {
          var parameters:* = {
              'CertID': CertIDCol.text, 'AssetID': AssetIDCol.text, 'CertDate': cdt,
              'Ccatat': CcatatCol.text, 'CertBy': CertByCol.text, 'StatusID': StatusIDCol.text,
              'UpdDate': lele, 'UpdUsr': ApplicationState.instance.luNm };
          doRequest('Update', parameters, saveItemHandler);
      }

      private function doRequest(method_name:String, parameters:Object, callback:Function):void {
          // add the method to the parameters list
          parameters['method'] = (method_name + 'ASC');
          gateway.request = parameters;
          var call:AsyncToken = gateway.send();
          call.request_params = gateway.request;
          call.handler = callback;
      }

      private function saveItemHandler(e:Object):void {
          if (e.isError) {
              Alert.show('Error: ' + e.data.error);
          }
          else {
              Alert.show('Record Saved..');
              waitfc = 'OK';
              dispatchEvent(new FlexEvent('SAVEOK'));
          }
      }

    Read the article

  • Performance - FunctionCall vs Event vs Action vs Delegate

    - by hwcverwe
    Currently I am using Microsoft Sync Framework to synchronize databases. I need to gather information per record inserted/updated/deleted by Microsoft Sync Framework and do something with it. The sync speed can exceed 50,000 records per minute, so my additional code needs to be very lightweight; otherwise it becomes a huge performance penalty. Microsoft Sync Framework raises a SyncProgress event for each record, and I am subscribed to it like this:

      // Assembly1
      SyncProvider.SyncProgress += OnSyncProgress;

      // ....

      private void OnSyncProgress(object sender, DbSyncProgressEventArgs e)
      {
          switch (args.Stage)
          {
              case DbSyncStage.ApplyingInserts:
                  // MethodCall/Delegate/Action<>/EventHandler<> => HandleInsertedRecordInformation
                  // Do something with inserted record info
                  break;
              case DbSyncStage.ApplyingUpdates:
                  // MethodCall/Delegate/Action<>/EventHandler<> => HandleUpdatedRecordInformation
                  // Do something with updated record info
                  break;
              case DbSyncStage.ApplyingDeletes:
                  // MethodCall/Delegate/Action<>/EventHandler<> => HandleDeletedRecordInformation
                  // Do something with deleted record info
                  break;
          }
      }

    Somewhere else, in another assembly, I have three methods:

      // Assembly2
      public class SyncInformation
      {
          public void HandleInsertedRecordInformation(...) {...}
          public void HandleUpdatedRecordInformation(...) {...}
          public void HandleDeletedRecordInformation(...) {...}
      }

    Assembly2 has a reference to Assembly1, so Assembly1 does not know anything about the existence of the SyncInformation class that needs to handle the gathered information. That gives me the following options for triggering the handling code:

    1. Use events and subscribe to them in Assembly2: 1.1 EventHandler<>, 1.2 Action<>, 1.3 delegates.
    2. Use dependency injection: public class Assembly2.SyncInformation : Assembly1.ISyncInformation
    3. Other?

    I know the performance depends on the OnSyncProgress switch, on whether a method call, delegate, Action<> or EventHandler<> is used, and on the implementation of the SyncInformation class. I currently don't care about the implementation of the SyncInformation class; I am mainly focused on OnSyncProgress and how to call the SyncInformation methods. So my questions are:

    1. What is the most efficient approach?
    2. What is the most inefficient approach?
    3. Is there a better way than using a switch in OnSyncProgress?
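
    Since the per-call differences are tiny, measuring them is the honest way to decide. A hedged micro-benchmark sketch (nothing here is from the question; absolute numbers vary by runtime and JIT, and direct calls, delegate invocations and event dispatches all cost on the order of nanoseconds, which at 50,000 records per minute is unlikely to be the bottleneck):

      // Hedged sketch: crude relative timing of direct call vs delegate vs event.
      using System;
      using System.Diagnostics;

      class CallCostSketch
      {
          static int counter;
          static void Handle() { counter++; }

          static event Action SomethingHappened;

          static void Main()
          {
              const int N = 50000000;
              Action asDelegate = Handle;
              SomethingHappened += Handle;

              Stopwatch sw = Stopwatch.StartNew();
              for (int i = 0; i < N; i++) Handle();             // direct call
              Console.WriteLine("direct:   " + sw.ElapsedMilliseconds + " ms");

              sw.Reset(); sw.Start();
              for (int i = 0; i < N; i++) asDelegate();         // delegate invoke
              Console.WriteLine("delegate: " + sw.ElapsedMilliseconds + " ms");

              sw.Reset(); sw.Start();
              for (int i = 0; i < N; i++)
              {
                  Action h = SomethingHappened;                 // event dispatch
                  if (h != null) h();
              }
              Console.WriteLine("event:    " + sw.ElapsedMilliseconds + " ms");
          }
      }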

    Read the article

  • Inequality joins, asynchronous transformations and Lookups: SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations, go read Asynchronous and synchronous data flow components). Notice I said "generally" and not "always"; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I'll demonstrate such a scenario, one that is pretty common when building data warehouses.

    Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension, then you will likely have multiple rows per customer in that table. Furthermore, you might also have datetime fields that indicate the effective time period of each member record. Here is such a table that contains data for four dimension members {Terry, Max, Henry, Horace}:

    Notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate], but for the purposes of clarity I have included it here.)

    Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate], and we need to use those values to look up [Customer].[CustomerId]. The logic will be:

      Lookup a [CustomerId]
      WHERE [CustomerName] = [CustomerName]
        AND [SCDStartDate] <= [EffectiveDate]
        AND [EffectiveDate] <= [SCDEndDate]

    The conventional approach to this would be to use a fully cached lookup, but that isn't an option here because we are using inequality conditions. The obvious next step, then, is to use a non-cached lookup, which enables us to change the SQL statement to use inequality operators. Let's take a look at the dataflow: notice these are all synchronous components. This approach works just fine; however, it has the limitation that it has to issue a SQL statement against the lookup set for every row, so we can expect the execution time of our dataflow to increase linearly in line with the number of rows in our dataflow. That's not good.

    OK, that's the obvious method. Let's now look at a different way of achieving this, using an asynchronous Merge Join transform coupled with a Conditional Split. I've shown it post-execution so that I can include the row counts, which help to illustrate what is going on here: notice that there are more rows output from our Merge Join component than on its input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join).

    I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8 minutes long. View on Vimeo. For those that can't be bothered watching the video, I'll tell you the results here: the dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary.

    The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors. It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached lookup performs a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise, we have dramatically sped up our dataflow.

    I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip Comments are welcome as always. @Jamiet
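
    For readers who want the lookup logic as concrete SQL, a hedged sketch of the non-cached Lookup query described above (table and column names are from the post; the parameter syntax is an assumption):

      -- Hedged sketch of the inequality lookup the post describes.
      SELECT c.[CustomerId]
      FROM   dbo.[Customer] c
      WHERE  c.[CustomerName] = @CustomerName
        AND  c.[SCDStartDate] <= @EffectiveDate
        AND  @EffectiveDate   <= c.[SCDEndDate];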

    Read the article

  • mono 3.0.2 + xsp + lighttpd delivers empty page

    - by Nefal Warnets
    I needed MVC 4 (and basic .NET 4.5) support, so I downloaded Mono 3.0.2 and deployed it on a lighttpd 1.4.28 installation, together with xsp-2.10.2 (the latest I could find). After going through the config tutorials I managed to get the FastCGI server to spawn, but all pages are served empty: even if I go to nonexistent URLs or direct .aspx files, I get an empty HTTP 200 response. The log file on Debug shows nothing suspicious. Here is the log:

      [2012-12-12 15:15:38Z] Debug Accepting an incoming connection.
      [2012-12-12 15:15:38Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8)
      [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 801)
      [2012-12-12 15:15:38Z] Debug Record received. (Type: Params, ID: 1, Length: 0)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_SOFTWARE = lighttpd/1.4.28)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_NAME = xxxx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (GATEWAY_INTERFACE = CGI/1.1)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PORT = 80)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_ADDR = xxxx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_PORT = xxx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (REMOTE_ADDR = xxxx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_NAME = /ViewPage1.aspx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (PATH_INFO = )
      [2012-12-12 15:15:38Z] Debug Read parameter. (SCRIPT_FILENAME = /data/htdocs/ViewPage1.aspx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (DOCUMENT_ROOT = /data/htdocs)
      [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_URI = /ViewPage1.aspx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (QUERY_STRING = )
      [2012-12-12 15:15:38Z] Debug Read parameter. (REQUEST_METHOD = GET)
      [2012-12-12 15:15:38Z] Debug Read parameter. (REDIRECT_STATUS = 200)
      [2012-12-12 15:15:38Z] Debug Read parameter. (SERVER_PROTOCOL = HTTP/1.1)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_HOST = xxxxx)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_CACHE_CONTROL = max-age=0)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip,deflate,sdch)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-US,en;q=0.8)
      [2012-12-12 15:15:38Z] Debug Read parameter. (HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.3)
      [2012-12-12 15:15:38Z] Debug Record received. (Type: StandardInput, ID: 1, Length: 0)
      [2012-12-12 15:15:38Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8)

    lighttpd config:

      server.modules += ( "mod_fastcgi" )
      include "conf.d/mono.conf"

      $HTTP["host"] !~ "^vdn\." {
          $HTTP["url"] !~ "\.(jpg|gif|png|js|css|swf|ico|jpeg|mp4|flv|zip|7z|rar|psd|pdf|html|htm)$" {
              fastcgi.server += ( "" => ((
                  "socket" => mono_shared_dir + "fastcgi-mono-server",
                  "bin-path" => mono_fastcgi_server,
                  "bin-environment" => (
                      "PATH" => mono_dir + "bin:/bin:/usr/bin:",
                      "LD_LIBRARY_PATH" => mono_dir + "lib:",
                      "MONO_SHARED_DIR" => mono_shared_dir,
                      "MONO_FCGI_LOGLEVELS" => "Debug",
                      "MONO_FCGI_LOGFILE" => mono_shared_dir + "fastcgi.log",
                      "MONO_FCGI_ROOT" => mono_fcgi_root,
                      "MONO_FCGI_APPLICATIONS" => mono_fcgi_applications
                  ),
                  "max-procs" => 1,
                  "check-local" => "disable"
              )) )
          }
      }

    The referenced mono.conf:

      index-file.names += ( "index.aspx", "default.aspx" )

      var.mono_dir = "/usr/"
      var.mono_shared_dir = "/tmp/"
      var.mono_fastcgi_server = mono_dir + "bin/" + "fastcgi-mono-server4"
      var.mono_fcgi_root = server.document-root
      var.mono_fcgi_applications = "/:."

    The document root for this server is /data/htdocs, and the ASP.NET files reside there. The lighttpd error logs show nothing. Any help is greatly appreciated!

    Read the article

  • MySQL for Beginners Training-on-Demand First Hand Insight

    - by Antoinette O'Sullivan
    The MySQL for Beginners course is THE course to get you started with MySQL, providing you a solid foundation in relational databases using MySQL as a learning tool. Oracle University recently released the Training-on-Demand option for this course. Ben Krug from the MySQL product team is trying out the MySQL for Beginners Training-on-Demand course and reporting on his experience; you can follow Ben on MySQL Support Blogs.

    The MySQL for Beginners course is available as:

    - Training-on-Demand: follow streaming video of instructor delivery and perform hands-on exercises at your own pace. You can start training within 24 hours of purchase.
    - Live-Virtual: attend a live instructor-led class from your own desk. Hundreds of events are on the schedule, across time zones.
    - In-Class: travel to an education center to attend this instructor-led class. Some events on the schedule are below:

      Location                              Date               Delivery Language
      Warsaw, Poland                        24 September 2012  Polish
      Dublin, Ireland                       15 October 2012    English
      London, United Kingdom                11 September 2012  English
      Rome, Italy                           5 November 2012    Italian
      Hamburg, Germany                      3 December 2012    German
      Lisbon, Portugal                      5 November 2012    European Portuguese
      Amsterdam, Netherlands                10 December 2012   Dutch
      Nieuwegein, Netherlands               18 February 2013   Dutch
      Nairobi, Kenya                        12 November 2012   English
      Barcelona, Spain                      5 November 2012    Spanish
      Madrid, Spain                         8 January 2013     Spanish
      Riga, Latvia                          12 November 2012   Latvian
      Petaling Jaya, Malaysia               22 October 2012    English
      Ottawa, Toronto and Montreal, Canada  17 December 2012   English
      Sao Paulo, Brazil                     11 September 2012  Brazilian Portuguese
      Sao Paulo, Brazil                     5 November 2012    Brazilian Portuguese

    For more information on the Authentic MySQL Curriculum, go to the Oracle University portal: http://oracle.com/education/mysql

    Read the article

  • Troubleshooting PHP email sending?

    - by darkAsPitch
    I created a website that occasionally emails users when they register, change their password, etc. Every other person, however, cannot or does not receive the emails; they tell me the messages are not even hitting their spam folders. I don't know a ton about MX records or email sending, but when I "Edit DNS Zone" for this domain in particular, there is one MX record listed there. How do you go about troubleshooting botched PHP mail actions? UPDATE: Here is my super-simple PHP mailing code:

      $subject = "Subject Here";
      $message = "Emails Message";
      $to = $verified_user_data["email_address"];
      $headers = "From: [email protected]\r\n" .
                 "Reply-To: [email protected]\r\n" .
                 "X-Mailer: PHP/" . phpversion();

      // returns true on success, false on failure
      $email_result = mail($to, $subject, $message, $headers);

    re: "are you saying that some do and some do not?" @Jacob: Yes, basically. I send the emails containing the user's login username/password using code similar to the above, and I sell to fairly tech-savvy people. About 50% of the time, my customers claim they cannot find their welcome emails in their inbox OR in their spam box. It's as if they never arrived. I seem to have the largest problem with Yahoo email addresses accepting my emails.

    re: "The MX record at your end doesn't factor in, although the SPF record (or lack of it) will. How much access and control do you have on the server itself?" @John Gardeniers: I rent a dedicated server from Codero running CentOS 5 with WHM + cPanel, and I have full root access to the entire thing. I don't know much about MX records and/or SPF records; I just want the PHP mail function to work. It doesn't say much about that on the PHP mail function's help page.

    re: "What are you using for the SMTP server?" @JonLim: No idea. I use the code above when I need to fire off an email to a loyal customer, and that's it. Do I need to be worrying about SMTP servers?

    re: "Could be many, many things. Can you describe how you're sending mail in your code? i.e. are you relaying off of another mail server somewhere, using the local sendmail or postfix? Any consistency in domains that can/cannot receive email? Do you have a PTR record setup from the IP address that you're sending mail out as? What about SPF records?" @gravyface: I just described my simple code above! I believe I have been having the most trouble with Yahoo domains; however, "independent" domains (probably running SpamAssassin), e.g. [email protected] as opposed to [email protected], seem to give a lot of trouble as well. I do not know if I have a PTR record set up for the IP address I'm sending mail from; it's probably the same IP address I set my domain up on, because I didn't do anything extra special. No idea about SPF records either; where can I go to create one?

    Side note: it's a crying shame what havoc the spammers have brought upon our beloved email system.
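
    One low-effort improvement worth trying (an assumption about the cause, not a guaranteed fix): without an explicit envelope sender, mail() sends with whatever Return-Path the server default provides, which strict receivers like Yahoo often score against. PHP's mail() takes a fifth argument of extra sendmail parameters; passing -f sets the envelope sender so it matches the From: domain.

      // Hedged sketch: set the envelope sender (Return-Path) explicitly.
      $email_result = mail(
          $to,
          $subject,
          $message,
          $headers,
          '[email protected]'   // passed through to sendmail
      );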

    Read the article

  • Google Apps e-mail being rejected from some domains

    - by Paul J. Lucas
    I'm migrating e-mail for my domains to Google Apps' e-mail. Most everything seems to work, except that e-mail sent to any user at (at least) sonic.net is rejected with a message of the form (where any-address has been substituted for my friend's address):

      From: Mail Delivery Subsystem <[email protected]>
      Date: March 11, 2010 10:04:48 AM PST
      To: [email protected]
      Subject: Delivery Status Notification (Failure)
      Delivered-To: [email protected]
      Received: by 10.229.194.26 with SMTP id dw26cs8717qcb; Thu, 11 Mar 2010 10:04:48 -0800 (PST)
      Received: by 10.223.68.143 with SMTP id v15mr3841599fai.62.1268330688325; Thu, 11 Mar 2010 10:04:48 -0800 (PST)
      Received: by 10.223.68.143 with SMTP id v15mr5119424fai.62; Thu, 11 Mar 2010 10:04:48 -0800 (PST)
      Mime-Version: 1.0
      Return-Path: <>
      X-Failed-Recipients: [email protected]
      Message-Id: <[email protected]>
      Content-Type: text/plain; charset=ISO-8859-1
      Content-Transfer-Encoding: quoted-printable

      Delivery to the following recipient failed permanently: [email protected]

      Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 5.1.1 <[email protected]>... No such user here (state 13).

    And here are the headers from the message it bounces back:

      Received: by 10.101.90.7 with SMTP id s7mr2515885anl.176.1267979929490; Sun, 07 Mar 2010 08:38:49 -0800 (PST)
      Return-Path: <[email protected]>
      Received: from [10.0.1.203] (adsl-76-201-171-194.dsl.pltn13.sbcglobal.net [76.201.171.194]) by mx.google.com with ESMTPS id 4sm1046550yxd.70.2010.03.07.08.38.48 (version=TLSv1/SSLv3 cipher=RC4-MD5); Sun, 07 Mar 2010 08:38:49 -0800 (PST)
      From: "Paul J. Lucas" <[email protected]>
      Content-Type: text/plain; charset=us-ascii
      Content-Transfer-Encoding: quoted-printable
      Subject: Some fascinating subject
      Date: Sun, 7 Mar 2010 08:38:46 -0800
      References: <[email protected]>
      To: [email protected]
      Message-Id: <[email protected]>
      Mime-Version: 1.0 (Apple Message framework v1077)
      X-Mailer: Apple Mail (2.1077)

    However, I am able to send mail to a user at sonic.net using my old e-mail account. Also, my company uses Google Apps for e-mail, and I can send e-mail to a user at sonic.net from my company. The differences between my personal e-mail and my company's are:

    1. My company's domain has no SPF record, whereas mine does.
    2. My company's domain has an A record, whereas mine does not.

    My SPF record was initially as prescribed by Google here. However, this guy claims Google is wrong and gives a fix; I've tried it both ways with no difference. My SPF record is currently:

      v=spf1 mx include:aspmx.googlemail.com include:_spf.google.com ~all

    As for the lack of an A record, you wouldn't think a mail host would care about that so long as MX records are defined. However, the funny thing is, if you look at the error message: why does Google state that the recipient's domain reported "No such user here" for my address? That makes no sense; of course there is no user having my address at sonic.net. Also, I assume I discovered that I can't send mail to users at sonic.net by accident, and that there are probably other domains I can't send e-mail to. So... anybody have any idea what's going on? And how I can get mail to users at sonic.net?

    Read the article

  • Today's Links (6/17/2011)

    - by Bob Rhubart
    Call for Nominations: Oracle Eco-Enterprise Innovation Awards
    Is your organization using Oracle products to reduce your environmental footprint while reducing costs? If so, submit your nomination for Oracle's Eco-Enterprise Innovation award. These awards will be presented to select customers and their partners who are using any of Oracle's products to not only take an environmental lead, but also to reduce their costs and improve their business efficiencies by using green business practices.

    Beyond The Data Grid: Coherence, Normalization, Joins, and Linear Scalability | Ben Stopford
    Ben Stopford presents ODC, a highly distributed in-memory normalized NoSQL datastore designed for scalability, based on normalized data, a Snowflake Schema, and the Connected Replication pattern.

    Upgrading ALSB services to OSB | John Chin-a-Woeng
    John Chin-a-Woeng walks you through the upgrade from AquaLogic Service Bus (ALSB 3.0) to Oracle Service Bus (OSB 10.3).

    SOA & Middleware: Pinning tasks to a user in BPM 11g | Niall Commiskey
    Commiskey illustrates a scenario.

    JDeveloper 11gR2: New option Test WebService in WSDL editor | Lucas Jellema
    The "Test WebService" button in the WSDL editor in JDeveloper 11gR2 is "just a little feature addition," says Oracle ACE Director Lucas Jellema. "But it can be quite useful all the same."

    Enterprise Business Intelligence 11g Seminar with Mark Rittman
    Oracle ACE Director Mark Rittman conducts a two-day course for Oracle University, in Dublin, IE, July 4-5, 2011.

    Data Integration Webcast Series
    Join Oracle experts for a series covering our data integration solutions. You'll get invaluable information to help boost your data infrastructure so that you can accelerate your business.

    Read the article

  • Server hard disk read speed and client download speed, is there a connection? [closed]

    - by Mywiki Witwiki
    OK, so a client's download speed is only as fast as the server's upload speed, and vice versa, based on the answers to this post: Does upload speed depend upon download speed of the server? In other words, the data transfer rate between the two computers is only as fast as the speed of the "bottleneck". Let's pretend the two computers are on two different networks and both have a 100 Mbps internet connection. Ben wants a copy of a file on Mark's computer, whose hard disk has a 30 Mbps read speed. Does this mean that Ben can download the file at a speed of only around 30 Mbps, despite having an internet connection faster than 30 Mbps?

    Read the article
