Search Results

Search found 924 results on 37 pages for '4kb sector'.

Page 19/37 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • How to store user preferences? The cookie keeps growing

    - by ari
    My application (ASP.NET MVC) has a lot of user-interface interaction (jQuery/JS): saving various search charts, moving gadgets around the screen, and more. Naturally I want to keep all this data for each user, so that it is available from any page on the domain and the user gets his preferences back. Right now I keep all the data in a cookie, because hitting the server asynchronously every time the user changes something did not seem sensible, and that happens a lot. When the user logs out of the application I save the cookie to the database. The problem is the cookie becomes very large. The thought that this huge cookie is attached to every server request makes me feel that my approach is wrong. Another problem: cookies have a size limit. It varies by browser, but I am definitely close to the boundary - my cookie easily reaches 4kb. Is there another solution?

    Read the article

  • Problem with Ruby script output being stored into a file

    - by nickf
    I have a Ruby script that outputs a heap of text. As an example:
    puts "line 1"
    puts "line 2"
    puts "line 3"
    # etc... (obviously, this isn't how my script works...)
    There's not a lot of data - perhaps about 8kb of character data in total. When I run the script on the command line, it works as expected:
    $ ./my-script.rb
    line 1
    line 2
    line 3
    But when I redirect it into a file, the output is truncated at exactly 4096 bytes:
    $ ./my-script.rb > output.txt
    What would cause it to stop at 4kb?
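    Output cut off at exactly 4096 bytes is the classic signature of an unflushed stdio-style buffer: when stdout is redirected to a file it is typically fully buffered, often in 4096-byte chunks, and a process that dies without flushing loses whatever is still sitting in the buffer. A minimal C++ sketch of the effect (not the poster's Ruby script; the loop and the abort are illustrative only):
    // stdout is usually line-buffered at a terminal but fully buffered
    // (often in 4096-byte chunks) when redirected to a file. If the
    // process dies without flushing, the unwritten tail is lost.
    #include <cstdio>
    #include <cstdlib>

    int main() {
        for (int i = 0; i < 500; ++i)
            std::printf("line %d\n", i); // sits in the stdio buffer when redirected

        // std::fflush(stdout);          // without this...
        std::abort();                    // ...an abnormal exit drops the unflushed tail
    }
    Run as ./a.out > output.txt and the file typically ends on a 4096-byte boundary; flushing (or exiting normally) fixes it.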

    Read the article

  • C#: Efficiently search a large string for occurrences of other strings

    - by Jon
    Hi, I'm using C# to continuously search for multiple string "keywords" within large strings, which are >= 4kb. This code loops constantly, and sleeps aren't cutting down CPU usage enough while maintaining a reasonable speed. The bottleneck is the keyword-matching method. I've found a few possibilities, and all of them give similar efficiency.
    1) http://tomasp.net/articles/ahocorasick.aspx - I do not have enough keywords for this to be the most efficient algorithm.
    2) Regex, using an instance-level, compiled regex - provides more functionality than I require, and not quite enough efficiency.
    3) String.IndexOf - I would need to do a "smart" version of this for it to provide enough efficiency. Looping through each keyword and calling IndexOf doesn't cut it.
    Does anyone know of any algorithms or methods that I can use to attain my goal?
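    One way to read the "smart IndexOf" option: instead of calling IndexOf once per keyword, make a single pass over the text and, at each position, test only the keywords whose first character matches the current byte. A hedged C++ sketch of that idea (the poster's code is C#; this illustrates the technique, not their implementation):
    // Group keywords by first character, then scan the text once,
    // probing only the keywords that could possibly start here.
    #include <iostream>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    std::vector<std::pair<size_t, std::string>>
    find_keywords(const std::string& text, const std::vector<std::string>& keywords) {
        std::unordered_map<char, std::vector<const std::string*>> by_first;
        for (const auto& k : keywords)
            if (!k.empty()) by_first[k[0]].push_back(&k);

        std::vector<std::pair<size_t, std::string>> hits;
        for (size_t i = 0; i < text.size(); ++i) {
            auto it = by_first.find(text[i]);
            if (it == by_first.end()) continue;          // nothing can start here
            for (const std::string* k : it->second)
                if (text.compare(i, k->size(), *k) == 0) // substring match test
                    hits.emplace_back(i, *k);
        }
        return hits;
    }

    int main() {
        for (auto& [pos, kw] : find_keywords("the cat sat on the mat", {"cat", "mat", "the"}))
            std::cout << kw << " @ " << pos << '\n';
    }
    With few keywords this stays close to a single IndexOf pass in cost; with many keywords, Aho-Corasick eventually wins.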

    Read the article

  • Problem with NSInputStream on a real iPhone

    - by ThamThang
    Hi guys, I have a problem with NSInputStream. Here is my code:
    case NSStreamEventHasBytesAvailable:
        printf("BYTE AVAILABLE\n");
        int len = 0;
        NSMutableData *data = [[NSMutableData alloc] init];
        uint8_t buffer[32768];
        if (stream == iStream) {
            printf("Receiving...\n");
            len = [iStream read:buffer maxLength:32768];
            [data appendBytes:buffer length:len];
        }
        [iStream close];
    When I read small data it works perfectly, both in the simulator and on a real iPhone. If I try to read large data (more than 4kB or maybe 5kB), the real iPhone reads just 2736 bytes and stops. Why is that? Please help! Thanks in advance!
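    A stream read returns whatever happens to be available at that moment - 2736 bytes here - not the whole message, so the receiver has to accumulate across reads; the snippet above allocates a fresh buffer and closes the stream inside a single event, which drops everything after the first chunk. A hedged C++/POSIX sketch of the accumulate-until-EOF pattern (an analogy, not the NSStream API):
    // A single read() returns only the bytes currently available.
    // Loop and accumulate until EOF (or a known message length).
    #include <unistd.h>
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> read_all(int fd) {
        std::vector<uint8_t> data;
        uint8_t buffer[32768];
        ssize_t n;
        while ((n = read(fd, buffer, sizeof buffer)) > 0) // keep reading...
            data.insert(data.end(), buffer, buffer + n);  // ...and accumulating
        return data;                                      // n == 0 means EOF
    }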

    Read the article

  • Leak caused by fread

    - by Jack
    I'm profiling the code of a game I wrote, and I'm wondering how it is possible that the following snippet causes a heap increase of 4kb (I'm profiling with the Heapshot Analysis of Xcode) every time it is executed:
    u8 WorldManager::versionOfMap(FILE *file) {
        char magic[4];
        u8 version;
        fread(magic, 4, 1, file);   // <-- this is the line
        fread(&version, 1, 1, file);
        fseek(file, 0, SEEK_SET);
        return version;
    }
    According to the profiler, the highlighted line allocates 4.00KB of memory with a malloc every time the function is called, memory which is never released. This seems to happen with other calls to fread around the code, but this was the most striking one. Is there anything trivial I'm missing? Is it something internal I shouldn't care about? Just as a note: I'm profiling it on an iPhone and it's compiled as release (-O2).
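    For context, the usual explanation for a one-time 4 KB malloc under fread is stdio's own I/O buffer: it is allocated lazily for each FILE* on first use and released at fclose, so it shows up as "never freed" only while the file stays open. A hedged C++ sketch of taking control of that buffer with setvbuf (the file name is hypothetical):
    // stdio allocates a per-FILE buffer (often 4096 bytes) on first use
    // and frees it at fclose. Supplying a buffer up front, or disabling
    // buffering, makes the hidden allocation go away.
    #include <cstdio>

    int main() {
        static char buf[1 << 12];                 // caller-owned stdio buffer
        FILE* f = std::fopen("map.dat", "rb");
        if (!f) return 1;

        std::setvbuf(f, buf, _IOFBF, sizeof buf); // must precede the first read
        // std::setvbuf(f, nullptr, _IONBF, 0);   // or: unbuffered, no allocation

        char magic[4];
        std::fread(magic, 4, 1, f);               // no hidden malloc now
        std::fclose(f);                           // buffer lifetime: ours, not stdio's
    }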

    Read the article

  • [C++] Is it possible to use threads to speed up file reading?

    - by Mister Mystère
    Hi there, I want to read a file as fast as possible (40k lines) [Edit: the rest is obsolete]. Edit: Andres Jaan Tack suggested a solution based on one thread per file, and I want to be sure I got this right (and thus that this is the fastest way):
    - One thread per input file reads the whole file and stores its content in an associated container (as many containers as there are input files).
    - One thread calculates the linear combination of every cell read by the input threads, and stores the results in the output container (associated with the output file).
    - One thread writes the content of the output container in blocks (every 4kB of data, so about 10 lines).
    Should I deduce that I must not use memory-mapped files (because the program would be on standby waiting for the data)? Thanks in advance. Sincerely, Mister Mystère.
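    A hedged C++ sketch of the three-stage pipeline described above: reader threads filling per-file containers, a combine step, and a buffered write. File names and coefficients are made-up placeholders, and note that an ofstream already buffers its writes, so the explicit "every 4 kB" stage is usually unnecessary:
    #include <fstream>
    #include <string>
    #include <thread>
    #include <vector>

    int main() {
        const std::vector<std::string> inputs = {"a.txt", "b.txt"}; // hypothetical
        std::vector<std::vector<double>> cells(inputs.size());

        // Stage 1: one reader thread per input file, each with its own
        // container, so no locking is needed while reading.
        std::vector<std::thread> readers;
        for (size_t i = 0; i < inputs.size(); ++i)
            readers.emplace_back([&, i] {
                std::ifstream in(inputs[i]);
                for (double v; in >> v; ) cells[i].push_back(v);
            });
        for (auto& t : readers) t.join();

        // Stage 2: linear combination of the cells (coefficients made up).
        const std::vector<double> coeff = {0.5, 0.5};
        size_t rows = cells.empty() ? 0 : cells[0].size();
        std::vector<double> out(rows, 0.0);
        for (size_t i = 0; i < cells.size(); ++i)
            for (size_t r = 0; r < rows && r < cells[i].size(); ++r)
                out[r] += coeff[i] * cells[i][r];

        // Stage 3: ofstream buffers writes (typically ~4 KB blocks) on its own.
        std::ofstream os("result.txt");
        for (double v : out) os << v << '\n';
    }
    Whether the reader threads actually help depends on the disk: on a single spinning disk, parallel reads can seek-thrash and end up slower than one sequential reader.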

    Read the article

  • Scalable (half-million files) version control system

    - by hashable
    We use SVN for our source-code revision control and are experimenting with using it for non-source-code files. We are working with a large set (300-500k) of short (1-4kB) text files that will be updated on a regular basis, and we need to version control it. We tried using SVN in flat-file mode, and it struggled to handle the first commit (500k files checked in), taking about 36 hours. On a daily basis, we need the system to be able to handle 10k modified files per commit transaction in a short time (<5 min). My questions:
    1) Is SVN the right solution for my purpose? The initial speed seems too slow for practical use.
    2) If yes, is there a particular svn server implementation that is fast? (We are currently using the gnu/linux default svn server and command-line client.)
    3) If no, what are the best free/open-source or commercial alternatives?
    Thanks

    Read the article

  • Can the dirtiness of pages of a mmap be found from userspace?

    - by chrisdew
    Can the dirtiness of pages of a (non-shared) mmap be accessed from userspace under Linux 2.6.30+? Platform-specific hacks and kludges welcome. Ideally, I'm looking for an array of bits, one per page (4kB?) of the mmap'ed region, each set if that page has been written to since the region was mmap'ed. (I am aware that the process doing the writing could keep track of this information - but it seems silly to do so if the kernel is doing it anyway.) Thanks, Chris.
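    Kernels newer than the one in the question expose roughly this: with CONFIG_MEM_SOFT_DIRTY (Linux 3.11+), bit 55 of each 64-bit entry in /proc/self/pagemap is a per-page "soft-dirty" flag, and writing "4" to /proc/self/clear_refs resets it process-wide. A hedged C++ sketch of reading that bit for one address (not available on 2.6.30 itself):
    // One 8-byte pagemap entry per virtual page, indexed by virtual
    // page number; bit 55 is the soft-dirty flag on 3.11+ kernels.
    #include <cstdint>
    #include <cstdio>
    #include <unistd.h>

    // Returns true if the page containing 'addr' is soft-dirty,
    // false otherwise (or on any error).
    bool page_soft_dirty(const void* addr) {
        long page = sysconf(_SC_PAGESIZE);
        FILE* f = std::fopen("/proc/self/pagemap", "rb");
        if (!f) return false;

        uint64_t entry = 0;
        long off = (reinterpret_cast<uintptr_t>(addr) / page) * 8;
        bool ok = std::fseek(f, off, SEEK_SET) == 0 &&
                  std::fread(&entry, sizeof entry, 1, f) == 1;
        std::fclose(f);
        return ok && (entry >> 55) & 1;
    }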

    Read the article

  • Image Size for Animation!

    - by taimur-hamza
    Hi everybody, I'm new to iPhone app development and I need some help. I have a list of 30 images that I have to animate, displayed at 0.1-second intervals. I put all the images in an array using
    imageletter.animationImages = [NSArray arrayWithObjects:[UIImage imageNamed:@"1.png"], ......, nil];
    and then animate them using these statements:
    [imageletter setAnimationDuration:16];
    [imageletter startAnimating];
    [NSTimer scheduledTimerWithTimeInterval:mytime target:self selector:@selector(StopAfterCertainTime) userInfo:nil repeats:NO];
    Now the problem is that the size of each image is 8kb; it runs fine on the iPhone simulator but crashes on the device. When I used 30 other images of 4kb each, it ran fine both on the simulator and on the device. Can anybody tell me what the ideal size for this kind of task is? Thanks
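    One piece of arithmetic worth keeping in mind here: animationImages holds every frame decoded in memory at once, and a decoded bitmap costs width × height × 4 bytes regardless of how small the compressed PNG file is. A hedged back-of-the-envelope sketch in C++ (the frame dimensions are hypothetical):
    #include <cstdio>

    int main() {
        const long frames = 30, width = 640, height = 960; // hypothetical frames
        long bytes = frames * width * height * 4;          // RGBA, 4 bytes/pixel
        std::printf("decoded animation: %.1f MB\n", bytes / (1024.0 * 1024.0));
    }
    At 640x960 that comes to roughly 70 MB for 30 frames, which is the kind of figure that crashes a device while a simulator with desktop RAM shrugs it off.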

    Read the article

  • Android App crashing on one device only

    - by Daniel1402
    I am working on a new game that works perfectly on my test devices, 7-inch tablets and smartphones, but it crashes on my Galaxy Tab 2 10-inch tablet with an out-of-memory error. It always crashes when I start to play a second game! I have spent a full week checking the code and I cannot figure out what is wrong. When I play from the menu screen, everything works fine. When I want to replay a game level from the level screen, the game crashes on the second launch. The level screen is made of 3 fragments, each with 32 buttons (4kB in size). I tried to keep only one fragment in memory with viewPager.setOffscreenPageLimit(1); but it does not solve the problem. Could someone steer me in some direction as to where to look for the potential problem? Why is the 10-inch tablet the only one to crash? Thanks.

    Read the article

  • Chris Brook-Carter at the Oracle Retail Week Awards VIP Reception

    - by user801960
    The Oracle VIP Reception at the Oracle Retail Week Awards last week saw retail luminaries from around the UK and Europe gather to have a drink and celebrate the successes of retail in the last year. Guests included Lord Harris of Peckham, Tesco's Philip Clarke, Vanessa Gold from Ann Summers, former Retail Week editor Tim Danaher, Richard Pennycook from Morrisons and Ian Cheshire from Kingfisher Group. The new Retail Week editor-in-chief, Chris Brook-Carter, attended and took the time to speak to the guests about the value of the Oracle Retail Week Awards to the industry and to thank Oracle for its dedication to supporting the industry. Chris said: "I'd like to say a real heartfelt thanks to our partner this evening: Oracle. I had the privilege of being at the judging day and I got to meet Sarah and the team and I was struck by not only the passion that they have for the whole awards system and everything that means in terms of rewarding excellence within the retail industry but also their commitment to retail in general, and it's that sort of relationship that marks out retail as such a fantastic sector to be involved in." Chris's speech can be watched in full below:

    Read the article

  • We're Back: I'm Here

    - by Brian Dayton
    After a busy fall and winter following Oracle OpenWorld 2009, Oracle's Application Strategy Blog is back. More on what we've been up to shortly. Me, I'm blogging here for the first time. After nearly 6 years at Oracle working on the Oracle Fusion Middleware business, I've recently joined the Oracle Applications team. For me, what's old is new again. Prior to working on applications infrastructure at Oracle - and at BEA Systems before that - I worked at PeopleSoft in a number of roles spanning Enterprise Performance Management, Supply Chain, Public Sector, Financial Services and more. Some of the acronyms are the same; there are (of course) some new ones too. But what I'm really excited about is the intersection of Enterprise Applications and Applications Infrastructure that's happening right now. "Aligning IT with Business Strategy" has been the buzzphrase for longer than we can all remember - but what I've seen over the past 5 months makes me start to believe that it's finally starting to happen.

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 2)

    - by Sadequl Hussain
    Continuing from Part 1, our migration checklist continues:
    Step 5: Update statistics It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database: sp_updatestats The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database. However, a word of caution: running sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistic of every table in the database. You may therefore want to change the compatibility mode before you run the command. Another thing you should remember to do is to ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from SSMS.
    Step 6: Set database options You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or Management Studio, since you can change the properties. However, it is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after restore. You may want to check the database's status programmatically in such cases. Another important option you may want to set for the newly restored / attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (default for SQL Server 2005 and later databases), TORN_PAGE_DETECTION (default when restoring a pre-SQL Server 2005 database) or NONE. Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates the checksum for a page's contents and writes it to the page header before storing it to disk. When the page is read from the disk, the checksum is computed again and compared with the checksum stored in the header (see the toy sketch after this excerpt). Torn page detection works in much the same way, in that it stores a bit in the page header for every 512-byte sector. When data is read from the page, the torn page bits stored in the header are compared with the respective sector contents. When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header. This may not be something you would want to set unless there is a very specific reason. Microsoft suggests using the CHECKSUM page verify option as this offers more protection.
    Step 7: Map database users to logins A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters.
    Depending on what you want to do, you may use fewer than four, though. The first parameter, @Action, can take three values. When you specify @Action = 'Report', the system will provide you with a list of database users which are not mapped to any login. If you want to map a database user to an existing SQL Server login, the value for @Action will be 'Update_One'. In this case, you will only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called "bob", there is already a SQL Server login with the same name, and you want to map the user to the login, you will execute a query like the following:
    sp_change_users_login
        @Action = 'Update_One',
        @UserNamePattern = 'bob',
        @LoginName = 'bob'
    If the login does not exist, you can instruct SQL Server to create the login with the same name. In this case you will need to provide a password for the login, and the value of the @Action parameter will be 'Auto_Fix'. If the login already exists, it will be automatically mapped to the user account. Unfortunately, the sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server. You will need to follow a manual process to re-map those database user accounts. Continues…
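    The toy sketch referenced in Step 6 - a C++ illustration of the checksum-on-write / verify-on-read idea behind PAGE_VERIFY, not SQL Server's actual page format or checksum algorithm:
    // Stamp a checksum into the page header on write, recompute it on
    // read, and flag a mismatch as corruption.
    #include <cstdint>
    #include <cstring>
    #include <iostream>
    #include <iterator>
    #include <numeric>

    constexpr size_t kPageSize = 8192;        // SQL Server pages are 8 KB

    struct Page {
        uint32_t checksum = 0;                // "header" field
        uint8_t  data[kPageSize - sizeof(uint32_t)] = {};
    };

    uint32_t body_checksum(const Page& p) {   // trivial additive checksum
        return std::accumulate(std::begin(p.data), std::end(p.data), 0u);
    }

    void write_page(Page& p) { p.checksum = body_checksum(p); }        // on write
    bool read_page(const Page& p) { return p.checksum == body_checksum(p); }

    int main() {
        Page p;
        std::memcpy(p.data, "hello", 5);
        write_page(p);
        p.data[100] ^= 0xFF;                  // simulate on-disk corruption
        std::cout << (read_page(p) ? "page ok" : "page corrupt") << '\n';
    }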

    Read the article

  • Fix overlapping partitions

    - by Alex
    I have a problem with overlapping partitions. GParted shows my whole disk as unallocated area; the output of fdisk is below:
    alex@alex-ThinkPad-SL510:~$ sudo fdisk -l /dev/sda
    Disk /dev/sda: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xfb4b9b90
    Device Boot      Start        End     Blocks  Id  System
    /dev/sda1   *     2048    2457599    1227776   7  HPFS/NTFS/exFAT
    /dev/sda2      2457600  571351724 284447062+   7  HPFS/NTFS/exFAT
    /dev/sda3    571342846  604661759   16659457   5  Extended
    /dev/sda4    604661760  625137663   10237952   7  HPFS/NTFS/exFAT
    /dev/sda5    598650880  604661759    3005440  82  Linux swap / Solaris
    /dev/sda6    571342848  598650879   13654016  83  Linux
    Partition table entries are not in disk order
    Do I understand correctly that the overlapping partitions are sda2 and sda3 (sda2 and sda6 overlap too, because sda6 is the first chunk of sda3, and sda3 has type "extended")? Are sda2 and sda3 the cause of the problem? How can I fix it without deleting partitions? My OS is Ubuntu 12.04, 64-bit. Thanks in advance.
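    That reading looks right, and it is easy to check mechanically: two sector ranges overlap when each starts no later than the other ends. A small C++ sanity check using the numbers from the fdisk output above:
    #include <cstdio>

    struct Part { const char* name; long long start, end; };

    int main() {
        Part sda2 = {"sda2", 2457600, 571351724};
        Part sda3 = {"sda3", 571342846, 604661759};  // extended container
        Part sda6 = {"sda6", 571342848, 598650879};  // logical, inside sda3

        auto overlap = [](const Part& a, const Part& b) {
            return a.start <= b.end && b.start <= a.end;
        };
        // sda3 starts at sector 571342846, but sda2 runs through 571351724,
        // so the extended partition (and with it sda6) begins inside sda2.
        std::printf("sda2/sda3 overlap: %d\n", overlap(sda2, sda3));  // prints 1
        std::printf("sda2/sda6 overlap: %d\n", overlap(sda2, sda6));  // prints 1
    }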

    Read the article

  • Gartner PCC Summit, Baltimore - Oracle's Take

    - by [email protected]
    Back from last week's trip to the Gartner PCC Summit in Baltimore, Andy MacMillan and Ajay Gandhi share their impressions of the conference. According to Andy and Ajay:
    - Interest in the sector is increasing - attendance at this year's conference was up by more than 50 percent.
    - The discussion at the conference this year shifted from a focus on what the tools are to how the tools can transform organizations and help build businesses.
    - Conference attendees were interested in taking a platform approach and looking to bring multiple tools together to solve problems and simplify business processes.
    If you are interested in learning more about the Bureau of Indian Affairs' deployment showcased in Ajay's session at the Gartner PCC Summit, come back soon - a detailed post is on its way.

    Read the article

  • StreamInsight/SSIS Integration White Paper

    - by Roman Schindlauer
    This has been tweeted all over the place, but we still want to give it proper attention here in our blog: SSIS (SQL Server Integration Services) is widely used by today's customers to transform data from different sources and load it into a SQL Server data warehouse or other targets. StreamInsight can process large amounts of real-time as well as historical data, making it easy to do temporal and incremental processing. We have put together a white paper to discuss how to bring StreamInsight and SSIS together and leverage both platforms to get crucial insights faster and easier. From the paper's abstract: The purpose of this paper is to provide guidance for enriching data integration scenarios by integrating StreamInsight with SQL Server Integration Services. Specifically, we looked at the technical challenges and solutions for such integration, by using a case study based on customer scenarios in the telecommunications sector. Please take a look at this paper and send us your feedback! Using SQL Server Integration Services and StreamInsight Together Regards, Ping Wang

    Read the article

  • Free vouchers for implementation exams (SOA, E2.0, etc.)

    - by pfolgado
    Would you like to receive free vouchers for the implementation exams? It's easy! Register with one of the EMEA Partner Communities. Most of these communities offer their members free vouchers for the exams on the products they cover. For example, members of the SOA Partner Community can obtain free vouchers for the SOA and BPM implementation exams. For more information on the Oracle Partner Communities, see the topic contacts below:
    - Applications & Systems Management: Javier Puerta
    - Business Intelligence & Enterprise Performance Management: Mike Hallett
    - Communications: Paul Thompson
    - CRM On Demand: Paul Thompson
    - Enterprise 2.0 (previously "Content Management"): Hans Blaas
    - Exadata: Javier Puerta
    - Healthcare: Paul Thompson
    - Identity Management & Security: Wolfgang Ehrenthaler
    - Manufacturing, Retail, Distribution and Life Science (MRD/LS): Paul Thompson
    - Public Sector: Paul Thompson
    - SOA / Integration: Jürgen Kress

    Read the article

  • New AutoVue Movies Available at the Oracle AutoVue Channel!

    - by Gerald Fauteux
    There are 4 new movies available at the Oracle AutoVue Channel. Three of these latest AutoVue movies demonstrate how AutoVue can be used in various processes in the Electronics & High Tech sector. The fourth shows how AutoVue can be used on an iPad using Oracle Virtual Desktop Infrastructure (OVDI). They are:
    - Improving the Design Process with AutoVue in the Electronics & High Tech Industry - Watch it now (7:17)
    - Improving Manufacturing and Assembly with AutoVue in the Electronics & High Tech Industry - Watch it now (7:55)
    - Improving Supply Chain Management with AutoVue in the Electronics & High Tech Industry - Watch it now (4:42)
    - Mobile Asset Management on the iPad with AutoVue and Oracle Virtual Desktop Infrastructure (OVDI) - Watch it now (3:52)
    See all the movies available at the Oracle AutoVue Channel!

    Read the article

  • Exadata for the retail sector

    - by Fekete Zoltán
    One of my favourite blogs, the Rittman Mead site, has posted information on a useful presentation, along with a download link. (See the "blog" keyword in the Top Tags box on the right, and the bottom entry.) The presentation is titled "Exadata in the Retail Sector", i.e. Exadata as used in retail. It was given by Jon Mead on 23 March 2010 in London, at the Exadata V2, Oracle Extreme Performance Data Warehousing Seminar. As you can see, the seminar sampled almost every fruit in the flourishing orchard of Oracle data warehousing and business intelligence - Oracle BI, the data warehousing features of 11gR2, and other topics. The presentations covered the following areas:
    - Exadata technical overview
    - customer stories: LGR, Allegro, and one of Great Britain's largest online consumer-electronics retailers
    - Oracle BI
    - GoldenGate (data replication)
    - advanced compression (compression of transactional data)
    - partitioning
    - OLAP
    - data mining, Oracle Data Mining

    Read the article

  • Oracle E-Business Supply Chain Suite Release 12.1.2: Latest & Greatest!

    - by [email protected]
    This week we hosted one of several planned orientation and training sessions for the ASR/ASM sales community. The purpose of the session was to familiarize our contact center and marketing associates with the 'hot points' of the latest release and to provide a few 'snippets' for the scheduled 'call-down' to the installed base. Oracle EBS Release 12.1.2 contains some of the most powerful supply chain applications technology available to the industrial, commercial and public sector communities. They should all be taking advantage of this great capability to drive margins, control costs and achieve compliance. In today's changing business landscape, organizations need competitive advantage, and our customers leveraging the upgrade tell us that R12 provides it.

    Read the article

  • Oracle Applications Day 2012 -Experience the Global Innovation of Management Applications

    - by antonella.buonagurio
    On 10 October in Milan and 17 October in Rome we held the Oracle Applications Days, dedicated to the community of Oracle customers and partners. The two days drew more than 400 attendees, who shared their experience and knowledge of the applications space. The plenary session covered all the news about Oracle Applications, and Oracle Fusion Applications in particular, while over the two days more than 20 customers spoke about how they use Oracle solutions strategically and successfully. Thanks to the "Partner Instant Workshop" initiative, 15 business partners met customers directly and discussed the hottest topics of the moment. If you could not attend the event, or want to relive those moments, the plenary presentation is below, and clicking on each parallel-session title takes you to its slides:
    - Innovation for Human Resources
    - Performance Management Excellence
    - Empower Applications with Technology (held only in Milan)
    - Applications for Public Sector (held only in Rome)
    - Next Generation Global Operations
    - Customer Experience Revolution

    Read the article

  • Ubuntu desktop installation problem, a lot of error messages

    - by veerendar
    Hi, I am getting error messages while trying to install Ubuntu 12.04 on my desktop (Intel Pentium D). The error messages are:
    checking battery state...
    [412.633532] end_request: I/O error, dev sr0, sector 1291684
    [435.997503] SQUASHFS error: squashfs_read_data failed to read block 0x275fcda4
    [435.9975xx] SQUASHFS error: unable to read fragment cache entry [275fcda4]
    ...
    [524.000055] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
    ata5.00: cmd a0/00:00:00:08:00/00:00:00/00 tag 0 pio 16392 in
    res 58/00:02:00:08:00/00/a0 Emask 0xf (timeout)
    [524.000292] ata5.00: status: { DRDY DRQ }
    I was able to install on another PC (Intel Atom) but not on this one (Intel Pentium D). Can anyone help me get a successful installation? Thanks!

    Read the article

  • How to list missing partitions?

    - by celebrimbor
    I have installed Ubuntu on one partition and Crunchbang on another. As I wanted to make some contiguous space, I moved the Crunchbang partition and then checked the fdisk output, which looks like this:
    Disk /dev/sda: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xc7996dfa
    Device Boot      Start        End     Blocks  Id  System
    /dev/sda1   *        63      80324      40131  de  Dell Utility
    /dev/sda4         81918  625139711  312528897   f  W95 Ext'd (LBA)
    /dev/sda5         81920  211816447  105867264  83  Linux
    /dev/sda6     299100160  341043199   20971520  83  Linux
    /dev/sda7     341045248  625139711  142047232   7  HPFS/NTFS/exFAT
    I cannot see the sda2 and sda3 partitions. How do I find them?

    Read the article

  • How to design database having multiple interrelated entities

    - by Sharath Chandra
    I am designing a new system which is more of a help system for core applications in the banking or healthcare sector. Given its nature, this is not a heavily transaction-oriented system but a read-intensive one. Within this application I have multiple entities which are related to each other. For example, assume the following entities in the system: User, Training, Regulations. Each of these entities has an M:N relationship with the others. Assuming the use of a standard RDBMS, the design may involve many relationship tables, each containing the relationships between one pair of entities ("User_Training", "User_Regulations", "Training_Regulations"). This design is limiting, since I have more than 3 entities in the system and maintaining the relationship graph is difficult this way. The most frequently used operation is "given an entity, get me all the related entities", so I need to design the database so that this operation is relatively inexpensive. What are the different recommendations for modelling this kind of database?
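    One commonly suggested shape for this problem is a single generic relationship store keyed by (entity type, id) pairs, so the per-pair tables disappear and "all related entities" becomes one lookup. A hedged in-memory C++ sketch of that idea (the types and names are illustrative, not taken from the question):
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    using EntityRef = std::pair<std::string, long>;           // (type, id)

    struct RelationshipStore {
        std::map<EntityRef, std::vector<EntityRef>> edges;

        void relate(const EntityRef& a, const EntityRef& b) { // symmetric M:N edge
            edges[a].push_back(b);
            edges[b].push_back(a);
        }
        const std::vector<EntityRef>& related(const EntityRef& e) const {
            static const std::vector<EntityRef> none;
            auto it = edges.find(e);
            return it == edges.end() ? none : it->second;
        }
    };

    int main() {
        RelationshipStore s;
        s.relate({"User", 1}, {"Training", 7});
        s.relate({"User", 1}, {"Regulation", 3});
        for (const auto& [type, id] : s.related({"User", 1}))
            std::cout << type << " #" << id << '\n';
    }
    In an RDBMS the same shape is one "relationship" table with (type_a, id_a, type_b, id_b) columns indexed on each side; the trade-off is that you give up foreign-key enforcement against the concrete entity tables.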

    Read the article
