Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

  • best cloud storage + rsnapshot

    - by humbledude
    I’ve started using rsnapshot as the backup system for my home PC. I really like the idea of hard links and how they are handled, but I can’t find the best workflow. Currently I keep my snapshots on the same partition and, say, copy the newest one to a pendrive at the end of the week. Cloud storage is what I’m looking for. Dropbox doesn’t fit my needs with rsnapshot: there is no way to make it respect hard links, so every snapshot is treated as a full snapshot. Renting a server is pretty expensive, so my question is: are there better alternatives for backup in the cloud? I would like to benefit from hard links and send only incremental backups, just like on my local host.
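    Not part of the original question: below is a minimal, editor-added sketch of how rsnapshot's hard-link trick can be pushed to any rsync-reachable host, assuming rsync on both ends and passwordless SSH (the host and paths are invented placeholders). rsync's --link-dest hard-links unchanged files to the previous snapshot on the receiver, so only changed files travel over the wire:

    ```python
    import subprocess
    from datetime import date

    REMOTE = "backup@example.com:/backups/homepc"   # hypothetical target
    PREVIOUS = "/backups/homepc/2014-01-01"         # last snapshot on the receiver

    # Unchanged files become hard links into PREVIOUS on the remote side,
    # so each dated directory looks like a full snapshot but costs only
    # the size of the changes -- the same idea rsnapshot uses locally.
    subprocess.run([
        "rsync", "-a", "--delete",
        f"--link-dest={PREVIOUS}",
        "/home/me/",                                # source to back up
        f"{REMOTE}/{date.today().isoformat()}/",
    ], check=True)
    ```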

    Read the article

  • secure offline PC storage accessible through javascript

    - by turbo2oh
    I'm attempting to build a browser-based HTML5 application that has the ability to store data locally on a PC (not a mobile device) when offline. This data is sensitive and must be secure. Of course the trick is trying to find a way to access the secure data with JavaScript. I've ruled out browser local storage since it's not secure. Could this be accomplished with a local database? If so, where could the DB credentials be stored? JavaScript obviously doesn't seem like a good option to store them, since it's user-readable.
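    Not from the question: one standard answer to "where do the credentials live" is to store no key at all and derive it from a passphrase the user types. A minimal sketch of the idea, in Python for illustration (the browser's Web Crypto API exposes the same primitives to JavaScript); all names are invented:

    ```python
    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_password(password: str, salt: bytes) -> bytes:
        # Derive a 32-byte key from the passphrase; only the salt is stored.
        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt,
                         iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    salt = os.urandom(16)                        # kept in the clear beside the data
    f = Fernet(key_from_password("user passphrase", salt))
    blob = f.encrypt(b"sensitive offline data")  # safe to park in local storage
    print(f.decrypt(blob))                       # needs the passphrase again
    ```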

    Read the article

  • recommendations for efficient offsite remote backup solution of vm's

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and ASUS motherboards. Each has 4 x 500 GB SATA2 HDDs: 1 for the OS and other data for the Proxmox install, and 3 using mdadm+DRBD+LVM to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines. I currently have the ability to do live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with MS SQL Server). I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour. The much better solution would of course be something that allows me to instantly take the difference of two points in time (say what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip2 or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor or other?
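    Not part of the question: a minimal sketch of the ZFS send/receive pipeline described above, driven from Python, assuming the VM images already live on a ZFS dataset and the offsite box accepts passwordless SSH (dataset, snapshot and host names are invented):

    ```python
    import subprocess

    OLD, NEW = "tank/vms@0600", "tank/vms@0700"     # hypothetical snapshots
    REMOTE = "backup@offsite.example.com"

    # Snapshot the current state (OLD is assumed to exist from the last run).
    subprocess.run(["zfs", "snapshot", NEW], check=True)

    # `zfs send -i` emits only the blocks written between the two snapshots;
    # compress the stream with bzip2 and feed it to `zfs receive` offsite.
    send = subprocess.Popen(["zfs", "send", "-i", OLD, NEW],
                            stdout=subprocess.PIPE)
    comp = subprocess.Popen(["bzip2", "-c"], stdin=send.stdout,
                            stdout=subprocess.PIPE)
    recv = subprocess.Popen(
        ["ssh", REMOTE, "bzip2 -dc | zfs receive -F tank/vms-backup"],
        stdin=comp.stdout)
    send.stdout.close()
    comp.stdout.close()
    recv.communicate()
    ```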

    Read the article

  • Isolated storage misunderstanding

    - by Costa
    Hi, this is a discussion between me and myself to understand the isolated storage issue. Can you help convince me about isolated storage? Below is code from a Windows Forms app (the reader) that reads the isolated storage of another Windows Forms app (the writer), which is signed. Where is the security if the reader can read the writer's file? I thought only signed code could access the file! If all .NET applications are born equal and have full permission to access isolated storage, where is the security? If I can install and run an EXE from isolated storage, why couldn't I install a virus and run it? I am trusted to access this area, but the virus (or whatever) will not be trusted to access the rest of the file system; it can only access memory, and that is dangerous enough. I cannot see any difference between using the app data folder to save state and using isolated storage, except a long nasty path! I want to try giving low trust to the reader code and retest, but they say "isolated storage was actually created to give low-trust applications the right to save their state." Reader code:

    ```csharp
    private void button1_Click(object sender, EventArgs e)
    {
        // Raw path into the writer's machine-level isolated storage.
        string path = @"C:\Documents and Settings\All Users\Application Data\IsolatedStorage\efv5cmbz.ewt\2ehuny0c.qvv\StrongName.5v3airc2lkv0onfrhsm2h3uiio35oarw\AssemFiles\toto12\ABC.txt";
        StreamReader reader = new StreamReader(path);
        var test = reader.ReadLine();
        reader.Close();
    }
    ```

    Writer:

    ```csharp
    private void button1_Click(object sender, EventArgs e)
    {
        IsolatedStorageFile isolatedFile = IsolatedStorageFile.GetMachineStoreForAssembly();
        isolatedFile.CreateDirectory("toto12");
        IsolatedStorageFileStream isolatedStorage = new IsolatedStorageFileStream(
            @"toto12\ABC.txt", System.IO.FileMode.Create, isolatedFile);
        StreamWriter writer = new StreamWriter(isolatedStorage);
        writer.WriteLine("Ana 2akol we ashrab kai a3eesh wa akbora");
        writer.Close();   // Close also disposes the underlying stream
    }
    ```

    Read the article

  • The internal storage of a DATETIME2 value

    - by Peter Larsson
    Today I went investigating the internal storage of the DATETIME2 datatype. What I found is that for a datetime2 value with precision 0 (seconds only), SQL Server needs 6 bytes to represent the value, but stores 7 bytes. This is because SQL Server adds one byte that holds the precision of the datetime2 value. Start with this very simple repro:

    ```sql
    declare @now datetime2(7) = '2010-12-15 21:04:03.6934231'

    select  cast(cast(@now as datetime2(0)) as binary(7)),
            cast(cast(@now as datetime2(1)) as binary(7)),
            cast(cast(@now as datetime2(2)) as binary(7)),
            cast(cast(@now as datetime2(3)) as binary(8)),
            cast(cast(@now as datetime2(4)) as binary(8)),
            cast(cast(@now as datetime2(5)) as binary(9)),
            cast(cast(@now as datetime2(6)) as binary(9)),
            cast(cast(@now as datetime2(7)) as binary(9))
    ```

    Now let's copy and paste these binary values and investigate which part represents what. The columns below are the precision prefix, the time part (hex, then decoded as ticks since midnight), and the day part (hex, then decoded as days since 0001-01-01), followed by the original binary value:

        Prefix  Ticks       Ticks         Days    Days    Original value
        ------  ----------  ------------  ------  ------  --------------------
        0x  00  442801             75844  A8330B  734120  0x00442801A8330B
        0x  01  A5920B            758437  A8330B  734120  0x01A5920BA8330B
        0x  02  71BA73           7584369  A8330B  734120  0x0271BA73A8330B
        0x  03  6D488504        75843693  A8330B  734120  0x036D488504A8330B
        0x  04  46D4342D       758436934  A8330B  734120  0x0446D4342DA8330B
        0x  05  BE4A10C401    7584369342  A8330B  734120  0x05BE4A10C401A8330B
        0x  06  6FEBA2A811   75843693423  A8330B  734120  0x066FEBA2A811A8330B
        0x  07  57325D96B0  758436934231  A8330B  734120  0x0757325D96B0A8330B

    What you can see is that the date part is equal in all cases, which makes sense since the precision doesn't affect the date part. What would have been fun is datetime2(negative), just as ROUND accepts a negative length: -1 would mean rounding to 10 seconds, -2 to the minute, -3 to 10 minutes, -4 to the hour and finally -5 to 10 hours. -5 is pretty useless, but if you extend this thinking to -6, -7 and so on, you could actually get a datetime2 value which is accurate to the month only. Well, enough ranting about this. Let's get back to the table above. If you add 75844 seconds to midnight, you get 21:04:04, which is exactly what you got in the select statement above. And it makes perfect sense that each following value is 10 times greater each time the precision is increased one step. //Peter
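    An editor's addition, not from the original post: the layout above can be double-checked outside SQL Server with a few lines of Python, following exactly the format the post describes (one precision byte, a variable-width little-endian time part, then a 3-byte little-endian count of days since 0001-01-01):

    ```python
    from datetime import date, timedelta

    def decode_datetime2(hexstr: str):
        raw = bytes.fromhex(hexstr)
        precision = raw[0]                           # first byte: precision
        ticks = int.from_bytes(raw[1:-3], "little")  # time part, variable width
        days = int.from_bytes(raw[-3:], "little")    # days since 0001-01-01
        seconds = ticks / 10 ** precision            # one tick = 10^-p seconds
        return date.fromordinal(days + 1), timedelta(seconds=seconds)

    print(decode_datetime2("00442801A8330B"))      # date 2010-12-15, 21:04:04
    print(decode_datetime2("0757325D96B0A8330B"))  # date 2010-12-15, 21:04:03.693423
    ```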

    Read the article

  • SQLBeat Podcast – Episode 4 – Mark Rasmussen on Machine Guns, Jelly Fish and SQL Storage Engine

    - by SQLBeat
    In this 4th SQLBeat Podcast I talk with fellow Dane Mark Rasmussen about SQL, machine guns and jellyfish fights; apparently they are common in our homeland. Who am I kidding, I am not Danish, but I try to be in this podcast. Also, we exchange knowledge on SQL Server storage engine particulars, as well as some other "internals" like password hashes and contained databases. And then it just gets weird and awesome. There is lots of background noise from people who did not realize we were recording, and I call them out and make fun of them as they deserve; well, just one person, who is well known in these parts. I also learn the correct (almost) pronunciation of "fjord". Seriously, a word with an "F" followed by a "J". And there are always the hippies and hipsters to discuss. Should be fun.

    Read the article

  • The internal storage of a DATETIMEOFFSET value

    - by Peter Larsson
    Today I went investigating the internal storage of the DATETIMEOFFSET datatype. What I found is that for a datetimeoffset value with precision 0 (seconds only), SQL Server needs 8 bytes to represent the value, but stores 9 bytes. This is because SQL Server adds one byte that holds the precision of the datetimeoffset value. Start with this very simple repro:

    ```sql
    declare @now datetimeoffset(7) = '2010-12-15 21:04:03.6934231 +03:30'

    select  cast(cast(@now as datetimeoffset(0)) as binary(9)),
            cast(cast(@now as datetimeoffset(1)) as binary(9)),
            cast(cast(@now as datetimeoffset(2)) as binary(9)),
            cast(cast(@now as datetimeoffset(3)) as binary(10)),
            cast(cast(@now as datetimeoffset(4)) as binary(10)),
            cast(cast(@now as datetimeoffset(5)) as binary(11)),
            cast(cast(@now as datetimeoffset(6)) as binary(11)),
            cast(cast(@now as datetimeoffset(7)) as binary(11))
    ```

    Now let's copy and paste these binary values and investigate which part represents what. The columns below are the precision prefix, the UTC time part (hex, then decoded as ticks since midnight), the day part (hex, then decoded as days since 0001-01-01), and the UTC offset in minutes, followed by the original binary value:

        Prefix  Ticks       Ticks         Days    Days    Offset  Offset  Original value
        ------  ----------  ------------  ------  ------  ------  ------  ------------------------
        0x  00  0CF700             63244  A8330B  734120  D200       210  0x000CF700A8330BD200
        0x  01  75A609            632437  A8330B  734120  D200       210  0x0175A609A8330BD200
        0x  02  918060           6324369  A8330B  734120  D200       210  0x02918060A8330BD200
        0x  03  AD05C503        63243693  A8330B  734120  D200       210  0x03AD05C503A8330BD200
        0x  04  C638B225       632436934  A8330B  734120  D200       210  0x04C638B225A8330BD200
        0x  05  BE37F67801    6324369342  A8330B  734120  D200       210  0x05BE37F67801A8330BD200
        0x  06  6F2D9EB90E   63243693423  A8330B  734120  D200       210  0x066F2D9EB90EA8330BD200
        0x  07  57C62D4093  632436934231  A8330B  734120  D200       210  0x0757C62D4093A8330BD200

    What you can see is that the date part is equal in all cases, which makes sense since the precision doesn't affect the date part. If you add 63244 seconds to midnight, you get 17:34:04, which is the correct UTC time. So what is stored is the UTC time, and the local time is found by adding the "UTC offset" minutes (here 210 minutes, i.e. +03:30). And it makes perfect sense that each following value is 10 times greater each time the precision is increased one step. //Peter
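    The same sanity check as for DATETIME2 works here, extended with the two-byte offset suffix; again an editor's sketch following the layout the post describes:

    ```python
    from datetime import date, timedelta

    def decode_datetimeoffset(hexstr: str):
        raw = bytes.fromhex(hexstr)
        precision = raw[0]                            # precision prefix
        ticks = int.from_bytes(raw[1:-5], "little")   # UTC ticks since midnight
        days = int.from_bytes(raw[-5:-2], "little")   # days since 0001-01-01
        offset = int.from_bytes(raw[-2:], "little", signed=True)   # minutes
        utc = timedelta(seconds=ticks / 10 ** precision)
        return date.fromordinal(days + 1), utc, utc + timedelta(minutes=offset)

    # UTC 17:34:04 plus the 210-minute offset gives the local 21:04:04.
    print(decode_datetimeoffset("000CF700A8330BD200"))
    ```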

    Read the article

  • Strategy for backwards compatibility of persistent storage

    - by Baqueta
    In my experience, trying to ensure that new versions of an application retain compatibility with data storage from previous versions can often be a painful process. What I currently do is save a version number for each 'unit' of data (be it a file, database row/table, or whatever) and ensure that the version number gets updated each time the data changes in some way. I also create methods to convert from v1 to v2, v2 to v3, and so on. That way, if I'm at v7 and I encounter a v3 file, I can do v3-v4-v5-v6-v7. So far this approach seems to be working out well, but I haven't had to make use of it extensively yet, so there may be unforeseen problems. I'm also concerned that if the objects I'm loading change significantly, I'll either have to keep around old versions of the classes or face updating all my conversion methods to handle the new class definition. Is my approach sound? Are there other/better approaches I could be using? Are there any design patterns applicable to this problem?
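    Not from the question: a minimal sketch of the converter chain described above, assuming each record is a dict carrying a 'version' field (the field names and the migrations themselves are invented):

    ```python
    def v1_to_v2(rec: dict) -> dict:
        rec = dict(rec)
        rec["name"] = rec.pop("title")   # hypothetical v2 rename
        rec["version"] = 2
        return rec

    def v2_to_v3(rec: dict) -> dict:
        rec = dict(rec)
        rec["tags"] = []                 # hypothetical v3 addition
        rec["version"] = 3
        return rec

    MIGRATIONS = {1: v1_to_v2, 2: v2_to_v3}   # one entry per format change
    CURRENT_VERSION = 3

    def upgrade(record: dict) -> dict:
        """Chain v(n) -> v(n+1) converters until the record is current."""
        while record["version"] < CURRENT_VERSION:
            record = MIGRATIONS[record["version"]](record)
        return record

    print(upgrade({"version": 1, "title": "holiday.jpg"}))
    # {'name': 'holiday.jpg', 'version': 3, 'tags': []}
    ```

    Each converter only needs to know about two adjacent versions, which is what keeps a v3 file loadable at v7 without a direct v3-to-v7 routine.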

    Read the article

  • Product Support News for Oracle Solaris, Systems, and Storage

    - by user12244613
    Hi System Support Customers, the April newsletter is now available. The April 2012 newsletter for Oracle Solaris, Systems, and Storage is available via document 1363390.1 (requires a My Oracle Support account to access). Please take a few minutes to read it; the newsletter is the primary method of communicating what we in support would like you to be aware of. If you are not receiving the newsletter, it could be because: (a) your Oracle profile does not have "allow Oracle communication" selected (on oracle.com sign in, or if logged in select "Account" and, under your job role, check that you have selected the box "[ ] Yes, send me e-mails on Oracle products..."), or (b) you have not logged a service request during the last 12 months. Oracle is working to improve the distribution process; changes are coming, and once they are ready I will write more about that. For today, if you don't automatically receive the newsletter, all you can do is save it as a favorite within My Oracle Support and come back on the 2nd of each month to check out the changes. This month I am really interested to find out whether the newsletter is providing the type of items that you are interested in. To gather some data on that, I have a small two-minute survey running on the newsletter, or you can access it [ here ]. Finally, if you think I am missing a topic in the newsletter, let me know by taking the survey or suggesting a topic via this blog. Get Proactive: don't forget about being proactive. The latest updates for the Systems and Solaris pages in the Get Proactive area are now available. Check out document 432.1 and learn what proactive features are available for Systems and Solaris.

    Read the article

  • Storage of leftover values in a situation of having to round down

    - by jt0dd
    I'm writing an app (client and server side) where the number of sales required of each employee must be tracked in whole-number form. Each month, the employees are required to sell a certain number, and this app needs to keep track of how many sales must be made in each 12-hour interval during the work week. Because I have to round the values down to a whole number, I must keep track of leftovers in the rounding process and ensure that they are always carried over. My method must ensure the storage of the leftover value even when client and server side crash, restart, close, etc. Right now, I'm doing this by storing the leftover in a field in the user's account row in the database each time a value is rounded, reading the stored value, removing any portion that is used (when a whole number is reached, most of the leftover is used up), and storing the new value. This practice seems weird because, while the leftovers are calculated on the client side, it's the same number for each employee, so every employee using the app is storing a copy of the same leftover data. Alternatively, I could have all clients store the data at once into the same field on a general table, but this is just as weird. Is there a better way to handle this, or is my method correct?
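    Not from the question: a minimal sketch of the carry-the-remainder idea, with the quota spread over fixed intervals (the numbers are invented). Because the leftover is the same for every employee, it could equally be stored once in a shared row rather than per account:

    ```python
    import math

    MONTHLY_QUOTA = 100      # sales required this month (hypothetical)
    INTERVALS = 60           # 12-hour intervals in the work month

    def next_target(carry: float) -> tuple[int, float]:
        """Return (whole sales due this interval, leftover to persist)."""
        exact = MONTHLY_QUOTA / INTERVALS + carry   # share plus old leftover
        due = math.floor(exact)                     # round down to whole sales
        return due, exact - due                     # remainder carries forward

    carry = 0.0                                     # load from the DB on startup
    for _ in range(3):
        due, carry = next_target(carry)
        print(due, round(carry, 4))                 # 1 0.6667 / 2 0.3333 / 2 0.0
    ```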

    Read the article

  • Most efficient 3d depth sorting for isometric 3d in AS3?

    - by AttackingHobo
    I am not using the built-in 3D MovieClips, and I am storing the 3D location my own way. I have read a few different articles on sorting depths, but most of them seem inefficient. I had a really efficient way to do it in AS2, but it was really hacky, and I am guessing there are more efficient ways that do not rely on possibly unreliable hacks. What is the most efficient way to sort display depths using AS3 with the Z depths I already have?

    Read the article

  • Where to store things like user pictures using Azure? Blob Storage?

    - by n26
    I have just migrated a test project of mine to Microsoft's Azure. But for functionality like avatar upload I need write access to files on the hard drive. This is a cloud environment, so that is not possible. How can I build such functionality instead? Should I use Blob Storage, or is there a better solution? Does it make sense to store all website images (e.g. layout images) in Blob Storage, so I would have a cookie-free domain for my static content?
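    Not from the question: a minimal sketch of an avatar upload to Blob Storage using the current Python SDK (azure-storage-blob), assuming the storage account and container already exist; the connection string and names are placeholders:

    ```python
    from azure.storage.blob import BlobServiceClient

    # Placeholder -- the real string comes from the Azure portal.
    CONN = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."

    service = BlobServiceClient.from_connection_string(CONN)
    container = service.get_container_client("avatars")   # hypothetical container

    with open("avatar.png", "rb") as f:
        container.upload_blob(name="user-42/avatar.png", data=f, overwrite=True)

    # The blob is then served over HTTP(S) from a cookie-free hostname:
    # https://<account>.blob.core.windows.net/avatars/user-42/avatar.png
    ```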

    Read the article

  • Is it possible to reference a file stored in Isolated Storage by its URI?

    - by Joel
    Using this previous question as motivation, I would like to temporarily store images and videos in Isolated Storage. My application (written in WPF/C#) will allow a user to review these temporarily stored items by viewing their contents in a MediaElement. I was hoping to set the MediaElement's Source property to a video or image's URI stored in Isolated Storage, but I cannot figure out how to dynamically create a URI, since that doesn't appear to be natively supported by Isolated Storage. Any help would be greatly appreciated - thank you in advance! Update - 1/21/09: After battling the issue for a day, I concluded that the Isolated Storage approach is not practical for storing large video files that need to be referenced by a URI.

    Read the article

  • Are there any USB flash drives or SD cards which use RAID or redundant storage for additional reliability?

    - by Luke Dennis
    I'm looking to get a fault-tolerant USB flash drive which saves data to multiple independent locations, whether using RAID or some other means of backing up data. Has a product like this ever been created, or is my only option to hack something together? (By the way: I'm aware that RAID doesn't prevent data corruption from software or the file system. I'm just looking for something that can handle one of the memory sticks going dead.)

    Read the article

  • Using Google Docs as a Storage System like S3. Is there any limit?

    - by mickthomp
    Hi all, I'm considering upgrading to a Google Docs premium account (gDrive). I'm wondering if it can be used the way I'm using Amazon S3 at the moment. I'd like to upload images. Do you know if there is any limit on the number of images I can upload to my 200 GB Google Docs account? I think it could be really useful to have something like that, and we could save some money on our web apps. Thank you ;)

    Read the article

  • Need a solution to store images (1 billion; 1,000,000,000) which users will upload to a website via PHP or JavaScript upload [on hold]

    - by wish_you_all_peace
    I need a solution to store 1 billion images which users will upload to a website via PHP or JavaScript upload (the website will have 1 billion page views a month, running Linux Debian distros), assuming a maximum of 20 photos per user: 10 thumbnails of 90px by 90px and 10 large, script-resized images with a maximum width or height of 500px, depending on the shape of the image (square, rectangular, horizontal, vertical, etc.). Assume this to be a LEMP-stack (Linux Nginx MySQL PHP) social-media or social-matchmaking type application whose content will be text and images. Since everyone knows storing tons of user-uploaded images inside a single directory, on NFS, etc. is bad, please explain the architecture and configuration of the entire storage setup, using any method you recommend, to store 1 billion images (no third-party cloud storage like S3; it has to be within a private data center using our own hardware and resources). The solution has to cover both storing the images and organizing them. How will we organize the users' images if a single user will have no more than 20 images (10 thumbs and 10 large with either width or height 500px)? Please consider that this has to be organized in a structural way, so we can fetch a single user's images via PHP/JavaScript or an API programmatically through some type of unique user identifier(s).
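    Not part of the question: one common way to keep a billion files manageable on local disks is to shard directories by a hash of the user ID, so no directory grows large and every file's path is computable from the identifier. A minimal sketch; the layout and names are an editor's assumption, not a recommendation from the post:

    ```python
    import hashlib
    from pathlib import Path

    def image_path(user_id: int, filename: str) -> Path:
        """Map a user to a two-level hashed directory, e.g. ab/cd/123456/."""
        digest = hashlib.md5(str(user_id).encode()).hexdigest()
        return (Path("/srv/images") / digest[:2] / digest[2:4]
                / str(user_id) / filename)

    # 256 * 256 = 65,536 leaf buckets; 1B images at 20 per user is 50M users,
    # i.e. roughly 760 user directories per bucket -- comfortably small.
    print(image_path(123456, "thumb_01.jpg"))
    ```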

    Read the article

  • STOP PRESS: FY15 Q1 Oracle ZS3 Contest for Partners

    - by Cinzia Mascanzoni
    04 June 2014. Share an unforgettable experience at the Teatro Alla Scala in Milan. Dear valued partner, we are pleased to launch a contest exclusive to our partners dedicated to promoting and selling Oracle Systems! You are essential to the success of Oracle and we want to recognize your contribution and effort in driving Oracle Storage to the market. To show our appreciation we are delighted to announce a contest giving the winners the opportunity to attend a roundtable chaired by senior Oracle executives and spend an unforgettable evening at the magnificent Teatro Alla Scala in Milan, followed by a stay at the Grand Hotel et de Milan, courtesy of Oracle. Recognition will be given to 12 partner companies (10 VARs and 2 VADs) for their ZFS storage booking achievement in the broad market between June 1st and July 18th, 2014.

    Criteria of eligibility: a minimum deal value of $30k is required for qualification; partners who are wholly or partially owned by a public sector organization are not eligible for participation.

    The winning VARs will be: the highest ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region (1), and the highest Oracle on Oracle (2) ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region. The winning VADs (3) will be: the highest ZS3 or ZBA bookings achiever by COB on July 18th, 2014 in EMEA, and the highest Oracle on Oracle (2) ZS3 or ZBA bookings achiever by COB on July 18th, 2014 in EMEA.

    (1) Two VAR winners for each EMEA region (Eastern Europe & CIS, Middle East & Africa, South Europe, North Europe, UK/Ireland & Israel) as per the criteria outlined above. (2) Oracle on Oracle, in this instance, means ZS3 or ZBA storage attached to DB or DB options, Engineered Systems or SPARC servers sold to the same customer by the same partner within the contest timelines. (3) Two VAD winners, one for each of the criteria outlined above, will be selected from across EMEA. Oracle shall be the final arbiter in selecting the winners. All winners will be notified via their Oracle account manager. Full details about the contest, expenses covered by Oracle and the timetable of events can be found on the Oracle EMEA Hardware (Servers & Storage) Partner Community workspace (FY15 Q1 ZFS Partner Contest). Access to the community workspace requires membership; if you are not a member please register here.

    The prize: winners will be invited to participate in a roundtable chaired by Oracle on Monday September 8th, 2014 in Milan and to be guests of Oracle that evening at the Teatro Alla Scala. The evening will comprise a private tour of the Scala museum, a cocktail reception in the elegant museum rooms, and a performance by the renowned soprano Maria Agresta. Our guests will then retire for the evening to the Grand Hotel et de Milan, courtesy of Oracle. Good luck!! For more information, please contact Sasan Moaveni. Regards, Olivier Tordo, Senior Director - Systems Business Development, Oracle EMEA Alliances & Channels.

    Read the article

  • Hyper-V 2012 and P2000 SAS SAN

    - by user155950
    Hi, I am having major problems setting up a Hyper-V 2012 cluster on a P2000 SAS SAN. Running System Center VMM 2012 SP1, I am unable to see any storage to create my cluster. Has anyone experienced anything similar? Under fabric and storage I can't add the P2000; all I can do is use Storage Spaces in Server Manager to create a storage pool and virtual disk. This lets me create a file share which I can add to VMM, but I still can't see any disk to create a cluster. I am just about at the point where I want to tear my hair out, wipe the servers and stick VMware on them, because I know that works: I have set several systems up like this in the past. The Hyper-V servers can see the storage, and in Server Manager on my management machine it seems to know that both servers can see the same disk. VMM is running on the same machine and it can't see any disk. Help..... Thanks, Mike

    Read the article

  • Oracle ZS3 Contest for Partners: Share an unforgettable experience at the Teatro Alla Scala in Milan

    - by Claudia Caramelli-Oracle
    Dear valued partner, we are pleased to launch a partner contest exclusive to our partners dedicated to promoting and selling Oracle Systems! You are essential to the success of Oracle and we want to recognize your contribution and effort in driving Oracle Storage to the market. To show our appreciation we are delighted to announce a contest giving the winners the opportunity to attend a roundtable chaired by senior Oracle executives and spend an unforgettable evening at the magnificent Teatro Alla Scala in Milan, followed by a stay at the Grand Hotel et de Milan, courtesy of Oracle. Recognition will be given to 12 partner companies (10 VARs and 2 VADs) for their ZFS storage booking achievement in the broad market between June 1st and July 18th, 2014. Criteria of eligibility: a minimum deal value of $30k is required for qualification; partners who are wholly or partially owned by a public sector organization are not eligible for participation. The winning VARs will be: the highest ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region (1), and the highest Oracle on Oracle (2) ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region. The winning VADs (3) will be: the highest ZS3 or ZBA bookings achiever by COB on July 18th, 2014 in EMEA, and the highest Oracle on Oracle (2) ZS3 or ZBA bookings achiever by COB on July 18th, 2014 in EMEA. The prize: winners will be invited to participate in a roundtable chaired by Oracle on Monday September 8th, 2014 in Milan and to be guests of Oracle that evening at the Teatro Alla Scala. The evening will comprise a private tour of the Scala museum, a cocktail reception in the elegant museum rooms, and a performance by the renowned soprano Maria Agresta. Our guests will then retire for the evening to the Grand Hotel et de Milan, courtesy of Oracle. Oracle shall be the final arbiter in selecting the winners, and all winners will be notified via their Oracle account manager. Full details about the contest, expenses covered by Oracle and the timetable of events can be found on the Oracle EMEA Hardware (Servers & Storage) Partner Community workspace (FY15 Q1 ZFS Partner Contest). Remember: access to the community workspace requires membership; if you are not a member please register here. Good luck!! For more information, please contact Sasan Moaveni. (1) Two VAR winners for each EMEA region (Eastern Europe & CIS, Middle East & Africa, South Europe, North Europe, UK/Ireland & Israel) as per the criteria outlined above. (2) Oracle on Oracle, in this instance, means ZS3 or ZBA storage attached to DB or DB options, Engineered Systems or SPARC servers sold to the same customer by the same partner within the contest timelines. (3) Two VAD winners, one for each of the criteria outlined above, will be selected from across EMEA.

    Read the article

  • MongoDB efficient dealing with embedded documents

    - by Sebastian Nowak
    I have serious trouble finding anything useful in Mongo documentation about dealing with embedded documents. Let's say I have a following schema: { _id: ObjectId, ... data: [ { _childId: ObjectId // let's use custom name so we can distinguish them ... } ] } What's the most efficient way to remove everything inside data for particular _id? What's the most efficient way to remove embedded document with particular _childId inside given _id? What's the performance here, can _childId be indexed in order to achieve logarithmic (or similar) complexity instead of linear lookup? If so, how? What's the most efficient way to insert a lot of (let's say a 1000) documents into data for given _id? And like above, can we get O(n log n) or similar complexity with proper indexing? What's the most efficient way to get the count of documents inside data for given _id?
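    Not from the question, but a sketch of how these operations usually look with MongoDB's update operators (shown via pymongo; the collection and IDs are placeholders). $pull removes matching embedded documents, $push with $each inserts many in one round trip, and a multikey index on data._childId lets the server find the right parent documents without a full collection scan (the array inside one document is still walked linearly):

    ```python
    from pymongo import MongoClient
    from bson import ObjectId

    coll = MongoClient().mydb.mycoll            # hypothetical db/collection
    doc_id, child_id = ObjectId(), ObjectId()   # placeholders

    # 1. Empty `data` for one _id.
    coll.update_one({"_id": doc_id}, {"$set": {"data": []}})

    # 2. Remove the embedded document with a given _childId.
    coll.update_one({"_id": doc_id},
                    {"$pull": {"data": {"_childId": child_id}}})

    # 3. Multikey index over the embedded _childId field.
    coll.create_index("data._childId")

    # 4. Insert ~1000 embedded documents in a single update.
    children = [{"_childId": ObjectId(), "v": i} for i in range(1000)]
    coll.update_one({"_id": doc_id},
                    {"$push": {"data": {"$each": children}}})

    # 5. Count the embedded documents server-side.
    cursor = coll.aggregate([{"$match": {"_id": doc_id}},
                             {"$project": {"n": {"$size": "$data"}}}])
    ```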

    Read the article

  • Most efficient way to set up a game server

    - by alex bowers
    I'm running a PHP-based game which is predicted to have over 45 million members by the end of this year (2011); currently we are at 7.5 million. This game runs on Facebook and I am in desperate need of help getting this game server as efficient and as powerful as possible. It is a dedicated server with an Intel i7 920 processor (4 x 2 x 2.66 GHz), gigabit Ethernet, 12 GB of RAM and 4 x 1 TB hard disks. It has Apache, cPanel, phpMyAdmin, several Apache mods and MySQL installed. The game also makes 47 MySQL calls per second per user. Are there any alternatives to the above which could be faster or more efficient? I don't mind having to recode the game to fit, as long as it maximises the upper limit of members in the game. Also, is there a way to tell what our maximum limit of players, database calls, etc. is? Thank you, hope you guys can help :)

    Read the article

  • An efficient way to store view counts for objects?

    - by Nick Brooks
    I maintain an application where users are able to store images and then share them. The system is powered by MongoDB at the back end. Most of the image pages are cached as flat HTML files, but I can run some code just before loading the file. I've decided to implement a view count for the system, and I am wondering what the best storage place for it is. It should behave like Memcached, but it should save the view counts every hour or so, so that even if our server has to be restarted we won't lose them. What is the best solution for this (preferably with a PHP extension as a client)?
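    Not part of the question: a minimal sketch of the pattern being described, counting in memory and flushing to durable storage on a timer (Python rather than PHP, purely for illustration; all names are invented). In practice a Redis counter incremented with INCR plus a periodic flush job gives the same behavior with persistence built in:

    ```python
    import threading
    from collections import Counter

    counts = Counter()            # in-memory view counts, keyed by image id
    lock = threading.Lock()

    def record_view(image_id: str) -> None:
        with lock:
            counts[image_id] += 1              # cheap per-request increment

    def flush(persist) -> None:
        """Move accumulated deltas to durable storage, then reschedule."""
        with lock:
            pending = dict(counts)
            counts.clear()
        for image_id, n in pending.items():
            persist(image_id, n)               # e.g. UPDATE ... SET n = n + ?
        threading.Timer(3600, flush, args=(persist,)).start()   # hourly

    flush(lambda img, n: print(f"+{n} views for {img}"))  # start the loop
    record_view("img42")
    ```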

    Read the article
