Search Results

Search found 16914 results on 677 pages for 'single threaded'.

Page 235/677 | < Previous Page | 231 232 233 234 235 236 237 238 239 240 241 242  | Next Page >

  • Data architecture for event log metrics?

    - by elliot42
    My service has a large ongoing number of user events, and we would like to do things like "count occurrences of event type T since date D." We are trying to make two basic decisions:

    What to store? Storing every event vs. only storing aggregates: (event-log style) log every event and count them later, vs. (time-series style) store a single aggregated "count of event E for date D" for every day.

    Where to store the data? In a relational database (particularly MySQL), in a non-relational (NoSQL) database, or in flat log files (collected centrally over the network via syslog-ng).

    What is standard practice / where can I read more about comparing the different types of systems? Additional details: the total event stream is large, potentially hundreds of thousands of entries per day, but our current need is only to count certain types of events within it. We don't necessarily need real-time access to the raw data or aggregation results. IMHO, "log all events to files, crawl them at a later time to filter and aggregate the stream" is a pretty standard UNIX way, but my Rails-y compatriots seem to think that nothing is real unless it's in MySQL.
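
    As a sketch of the time-series option (one counter per event type per day), here is a minimal in-memory version in Java; the class name and event format are invented for illustration, and a real system would persist these counters rather than hold them in a map:

        import java.time.LocalDate;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.LongAdder;

        // Time-series style: keep one counter per (event type, day)
        // instead of storing every raw event.
        public class EventCounter {
            private final Map<String, Map<LocalDate, LongAdder>> counts = new ConcurrentHashMap<>();

            public void record(String eventType, LocalDate day) {
                counts.computeIfAbsent(eventType, t -> new ConcurrentHashMap<>())
                      .computeIfAbsent(day, d -> new LongAdder())
                      .increment();
            }

            // "Count occurrences of event type T since date D."
            public long countSince(String eventType, LocalDate since) {
                return counts.getOrDefault(eventType, Map.of()).entrySet().stream()
                        .filter(e -> !e.getKey().isBefore(since))
                        .mapToLong(e -> e.getValue().sum())
                        .sum();
            }
        }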

    Read the article

  • Viewport.Unproject - Checking if a model intersects a large sprite

    - by Fibericon
    Let's say I have a sprite, drawn like this:

        spriteBatch.Draw(levelCannons[i].texture, levelCannons[i].position, null, alpha,
                         levelCannons[i].rotation, Vector2.Zero, scale, SpriteEffects.None, 0);

    Picture levelCannon as being a laser beam that goes across the entire screen. I need to see if my 3D model intersects with the screen space inhabited by the sprite. I managed to dig up Viewport.Unproject, but that seems to only be useful when dealing with a single point in 2D space, rather than an area. What can I do in my case?
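
    One approach (a sketch, not the asker's code): instead of unprojecting the sprite's area, project the model's bounding-sphere centre into screen space (in XNA that is Viewport.Project) and do the overlap test in 2D. The Java sketch below shows the underlying math, assuming a combined column-major view-projection matrix:

        public class ScreenSpace {
            // Multiply a column-major 4x4 view-projection matrix with (x, y, z, 1),
            // then map from clip space to pixel coordinates.
            static float[] project(float[] m, float x, float y, float z, int vpW, int vpH) {
                float cx = m[0]*x + m[4]*y + m[8]*z  + m[12];
                float cy = m[1]*x + m[5]*y + m[9]*z  + m[13];
                float cw = m[3]*x + m[7]*y + m[11]*z + m[15];
                // Perspective divide, then NDC [-1,1] -> pixels (y flipped).
                float sx = (cx / cw * 0.5f + 0.5f) * vpW;
                float sy = (1.0f - (cy / cw * 0.5f + 0.5f)) * vpH;
                return new float[] { sx, sy };
            }

            // With the centre in pixels, intersecting the laser's screen rectangle
            // is plain 2D circle-vs-rectangle math.
            static boolean hits(float[] p, float radiusPx, float rx, float ry, float rw, float rh) {
                float nearestX = Math.max(rx, Math.min(p[0], rx + rw));
                float nearestY = Math.max(ry, Math.min(p[1], ry + rh));
                float dx = p[0] - nearestX, dy = p[1] - nearestY;
                return dx * dx + dy * dy <= radiusPx * radiusPx;
            }
        }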

    Read the article

  • Visual Studio Editor Choosing System

    This document gives an overview of how the Visual Studio editor choosing system works, and as an example discusses the XML editor's choosing system. Visual Studio has the ability to associate multiple editors with a single file extension. For instance, .xaml files have multiple editor implementations associated with them. This raises the question of how Visual Studio chooses a specific editor implementation when asked to open a file.

    Read the article

  • Game Asset Storage: Archive vs Individual files

    - by David Colson
    I am in the process of creating a 3D C++ game, and I was wondering what would be more beneficial when dealing with game assets, with regard to storage. I have seen some games use a single compressed asset file with everything in it, and others use lots of little compressed files. If I had lots of individual files, I would not need to load one large file at once and use up memory, but the code would have to go file seeking when the level loads to find all the correct files. There is no file seeking needed when dealing with one large file, but again, what about all the assets not currently needed that would get loaded with the one file? I could also have an asset file for each level, but then how do I deal with shared assets? This has been bothering me for a while, so tell me: what other advantages and disadvantages are there to either way of doing things?
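
    One middle ground between the two extremes is a single archive with an index of offsets, so the file is opened once but assets are still loaded individually on demand. A minimal sketch (in Java rather than the asker's C++, and with an invented header layout of [count][name, offset, length]...):

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.util.HashMap;
        import java.util.Map;

        // One archive on disk, but assets are read individually via an index,
        // so opening the pack does not pull every asset into memory.
        public class AssetPack {
            private record Entry(long offset, int length) {}

            private final RandomAccessFile file;
            private final Map<String, Entry> index = new HashMap<>();

            public AssetPack(String path) throws IOException {
                file = new RandomAccessFile(path, "r");
                int count = file.readInt(); // hypothetical header layout
                for (int i = 0; i < count; i++) {
                    index.put(file.readUTF(), new Entry(file.readLong(), file.readInt()));
                }
            }

            public byte[] load(String name) throws IOException {
                Entry e = index.get(name);
                if (e == null) throw new IOException("no such asset: " + name);
                byte[] data = new byte[e.length()];
                file.seek(e.offset());
                file.readFully(data);
                return data;
            }
        }

    Shared assets then live in the archive once, and per-level manifests simply list which names to preload.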

    Read the article

  • Bandwidth heavy site... use co-location?

    - by darron
    I'm working on a web site that is likely to be very bandwidth-heavy. A major feature of the site when in active use can pull up to 1Mbps for a single session. Luckily, once users get over the new toy factor, the use of this feature will probably be 1-5% or less (probably much less) of session time. However, new users are likely to play with this feature a good bit, especially at launch. I'm very concerned about bandwidth use. This is more or less a niche market, so I won't ever be needing to scale to crazy levels like YouTube. However, it is entirely possible for it to be a couple terabytes/month. Is co-location my best option? What cheap bandwidth services (colocation/hosted/cloud/whatever) are out there?

    Read the article

  • Provincial Forum & the Best of Oracle OpenWorld for Public Sector

    - by user511693
    Provincial Ministries, Crowns and Agencies are transforming in an effort to meet increasing service expectations from citizens, legislative mandates, and current economic pressures. There is a need to be more efficient and accountable, providing services and information to constituents expeditiously and cost-effectively. However, legacy information systems typically support single program functions. These disparate systems pose a complex canvas upon which to compose a more efficient government systems landscape. Please join your fellow government leaders and Oracle on December 6, 2011 to discuss these challenges and learn how government agencies are leveraging IT as a core tool to streamline multi-organization operations, thereby delivering a more cost-effective, citizen-centric, and sustainable government. Register here.

    Read the article

  • Want to make jar,war,ear files using apache ANT and use hudson for automated build process [closed]

    - by user1314506
    I want to write a build.xml for all of the following tasks, and I want to set up Jenkins or Hudson for continuous integration. How should I write the build file using Apache Ant, and how can I build all of the projects using a single build file?

    1. mkdir MyProjectsjar
    2. Compile each of the following projects and create a jar file for it: javaproject1 through javaproject17, where each project contains a package1 (and in many cases a package2) of Java source files.
    3. Copy the above jar files into the folder created in step 1.
    4. Compile the EJB projects and create the EAR project.
    5. Compile the web projects and all other projects and create the WAR files.
    6. Copy the EAR and WAR files to the jboss/default/deploy folder.

    Read the article

  • URIs and Resource vs Resource representation

    - by bckpwrld
    A URL is a URI which identifies a resource by location. A resource representation is a view of a resource's state. This view is encoded in one or more transferable formats, such as XHTML, Atom, XML, MP3... URIs associate resource representations with their resources. a) So I assume a URI identifies a resource and not a resource representation? b) I've read that the relationship between a URI and resource representations is one-to-many. Assuming we're talking about a URL, how can a single URL address more than one resource representation? thank you
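
    Regarding (b): one standard mechanism is HTTP content negotiation, where a single URL yields different representations depending on the Accept request header. A minimal sketch using Java's built-in HTTP client, assuming a server that honors Accept (the URL is a placeholder):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class ContentNegotiation {
            public static void main(String[] args) throws Exception {
                HttpClient client = HttpClient.newHttpClient();
                URI resource = URI.create("https://example.com/reports/42"); // one URI, one resource

                // Same URI, two Accept headers -> potentially two different representations.
                for (String format : new String[] { "application/xml", "application/json" }) {
                    HttpRequest request = HttpRequest.newBuilder(resource)
                            .header("Accept", format)
                            .build();
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println(format + " -> " +
                            response.headers().firstValue("Content-Type").orElse("?"));
                }
            }
        }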

    Read the article

  • Is it time to deprecate synchronized, wait and notify?

    - by OldCurmudgeon
    Is there a single scenario (other than compatibility with ancient JVMs) where using synchronized is preferable to using a Lock? Can anyone justify using wait or notify over the newer systems? Is there any algorithm that must use one of them in its implementation? I have seen previous questions that touched on this matter, but I would like to take this a little further and actually deprecate them. There are far too many traps, pitfalls and caveats with them that have been ironed out by the new facilities. I just feel it may soon be time to mark them obsolete.
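
    For reference, this is what the "newer systems" look like in practice: the same guarded-block pattern written with ReentrantLock and Condition instead of synchronized, wait and notifyAll. A minimal sketch:

        import java.util.concurrent.locks.Condition;
        import java.util.concurrent.locks.Lock;
        import java.util.concurrent.locks.ReentrantLock;

        // A one-slot buffer: the Lock/Condition equivalent of
        // synchronized + wait()/notifyAll().
        public class OneSlotBuffer<T> {
            private final Lock lock = new ReentrantLock();
            private final Condition changed = lock.newCondition();
            private T slot; // null means empty

            public void put(T value) throws InterruptedException {
                lock.lock();
                try {
                    while (slot != null) changed.await();   // was: wait()
                    slot = value;
                    changed.signalAll();                    // was: notifyAll()
                } finally {
                    lock.unlock();
                }
            }

            public T take() throws InterruptedException {
                lock.lock();
                try {
                    while (slot == null) changed.await();
                    T value = slot;
                    slot = null;
                    changed.signalAll();
                    return value;
                } finally {
                    lock.unlock();
                }
            }
        }

    One advantage the old primitives lack: a single Lock can hand out separate Conditions (e.g. notFull and notEmpty) so producers and consumers don't wake each other needlessly.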

    Read the article

  • Wordpress Multisite Network installation and dev questions

    - by Daitya
    Please go easy on me; I'm a klutzy dinosaur. I currently have a large, unwieldy website hand-coded in HTML/CSS with PHP includes. It currently has a single WP installation in a subdirectory. The plan is to reorganize, and I want to use WP as the CMS and incorporate 3 WP blogs for 3 subdomains. Ideally, I would like to create a WP multisite network to allow for further expansion and to save admin trouble. I just want to confirm: if I install WP in the root directory and create 3 blogs (in subdomains), does this mean my website's home page is the mother blog's index.php? Essentially, I will have created 4 blogs - the mother at root and 3 children in subdomains? How do I set this up on my Mac (OS X 10.5.8) running MAMP for development? And then how do I migrate to the server without breaking anything?

    Read the article

  • Can I copy large files faster without using the file cache?

    - by Veazer
    After adding the preload package, my applications seem to speed up, but if I copy a large file, the file cache grows by more than double the size of the file. When I transfer a single 3-4 GB VirtualBox image or video file to an external drive, this huge cache seems to push all the preloaded applications out of memory, leading to increased load times and general performance drops. Is there a way to copy large, multi-gigabyte files without caching them (i.e., bypassing the file cache)? Or a way to whitelist or blacklist specific folders from being cached?

    Read the article

  • Using an external hard drive as a server and be able to connect via wifi [on hold]

    - by user289228
    OK, so I have an old external HDD (Seagate, 1 TB) and a Windows computer just in case, but I'm trying to set it up as a server for my home at the moment. If everything goes right, I want to move it to being a database hub for the business I work for; that way I can have more than one register at a time on the same basic database. The thing is, I'm new to the whole server part, and I'm not well versed with Ubuntu either. What I'm getting at is: if I can get it set up, can I connect to it from Ubuntu and run a Linux-based point-of-sale program, but have multiple machines linked to the single server? I may be able to do it via the router; I think it's a Belkin N600 with a USB port, but at this moment I don't know for sure since I'm not home. I just need to know if this is possible, and if so, a guide would be appreciated.

    Read the article

  • How do I download a corrupted package again?

    - by user64720
    Ubuntu 12.04 can't install the Firefox 13 update, because the package is corrupted. While attempting to install, it returns this error (I translated it from my language to English):

        /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb
        W: Waited for dpkg --assert-multi-arch but was not there - dpkgGo (10: There are no "child" processes)

    I can tell that the package at /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb is corrupted, but even as admin, I can't delete it in order to have it downloaded again. How should I proceed? EDIT: There was a single package causing this conflict; please refer here to understand the whole situation: Why can't I install from software center?

    Read the article

  • Set Position of multiple bodies

    - by philipp
    I have a character composed of five bodies which are tied together by a lot of joints. One of them is the overall chassis, to which all forces and impulses are applied to move the whole character. All in all that works very fine, except one thing: I need to set the position of the character so that it gets beamed from one place to the other in a single frame. Unfortunately I cannot get this to work. I tried the following code, without any success…

        playerbodies.forEach(function (bd) {
            bd.SetLinearVelocity(new b2.Vec2());
            var t = bd.GetTransform();
            t.p.x -= 10;
            bd.SetTransform(t, bd.GetAngle());
        });

    How can I make that happen?
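
    For comparison, here is the same teleport written against JBox2D, the Java port of Box2D, where setTransform takes the new position vector directly instead of a mutated transform object; note the explicit setAwake(true), a common missing piece when bodies refuse to move. This is a sketch, not a confirmed fix for the asker's JavaScript binding:

        import org.jbox2d.common.Vec2;
        import org.jbox2d.dynamics.Body;
        import java.util.List;

        public class Teleporter {
            // Shift every body of the character by the same offset in one frame.
            static void teleport(List<Body> playerBodies, Vec2 offset) {
                for (Body bd : playerBodies) {
                    bd.setLinearVelocity(new Vec2(0, 0)); // kill momentum
                    bd.setAngularVelocity(0);
                    Vec2 p = bd.getPosition();
                    bd.setTransform(new Vec2(p.x + offset.x, p.y + offset.y), bd.getAngle());
                    bd.setAwake(true); // make sure the solver picks up the new state
                }
            }
        }

    Moving every body by the same delta keeps the joints at their current lengths, so nothing snaps on the next step.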

    Read the article

  • Installing RVM on 11.10

    - by Guided33
    I have been trying to get RVM properly installed on my system for 10 hours. The problem is that whenever I run the command to download the install script, I get this:

        edu@edu-VirtualBox:~$ bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)
        mkdir: cannot create directory `/usr/share/ruby-rvm': Permission denied

    If I run the command with sudo, I can get it installed, but that leads to a whole host of other issues. Every tutorial I read says that you should not install RVM with sudo for a single-user install. Why can't I seem to get it installed without running sudo?

    Read the article

  • Service Layer - how broad should it be, and should it also be used from the local application?

    - by BornToCode
    The background: I need to build a desktop application with some operations (CRUD and more) (= WinForms), and I need to make another application which will re-use some of the functions of the main application (= WebForms). I'm using a service layer for reusing my functions. The service calls the functions on the BL layer (correct me if I'm doing this wrong), so my desktop solution has 4 projects: DAL, BL, UI, WEBSERVICES. The dilemma (simple, but I still need some more experienced opinions): In my main WinForms UI, should I call the functions from the BL directly - bl.getcustomers() - or do it similar to how I call them in the WebForm, and call the functions through the service - webservices.getcustomers()? Should I create a service for every single function in the BL, even if I need some of the functions only in one UI? For example, should I create services for all the CRUD operations, even though I need to re-use only the update operation in the WebForm? Your help is much appreciated.
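
    A sketch of one common answer (in Java for brevity; all names are invented): let the local UI call the BL in-process, and keep the service layer as a thin facade over the same BL that exposes only what remote clients actually need:

        import java.util.List;

        // Business layer: one implementation, used by everything.
        class CustomerLogic /* the "BL" */ {
            List<String> getCustomers() { /* ... query the DAL ... */ return List.of("Ada", "Linus"); }
            void updateCustomer(String name) { /* ... */ }
        }

        // Local desktop UI: calls the BL in-process; no web-service hop.
        class DesktopUi {
            private final CustomerLogic bl = new CustomerLogic();
            void refresh() { System.out.println(bl.getCustomers()); }
        }

        // Remote facade: exposes only what remote clients need (here, just update).
        class CustomerWebService {
            private final CustomerLogic bl = new CustomerLogic();
            public void updateCustomer(String name) { bl.updateCustomer(name); }
        }

    The design point: routing the local UI through its own web service buys nothing but latency, while the facade keeps the remote surface small so you don't have to mirror every BL function.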

    Read the article

  • Do you have Reconciliation Problems in Procurement between the Subledger and GL?

    - by LuciaC
    We are happy to announce the new Accrual Reconciliation Diagnostic & Troubleshooting Guide provided in Doc ID 1478292.1. The accrual diagnostics script is designed to be run when there is a reconciliation issue between the subledger and GL, and it provides a user-friendly report. It was created to allow customers to run a single script to retrieve all data from various tables, instead of having to run individual scripts. Doc ID 1478292.1 guides you through downloading and running the script, includes a full sample output in the attachments, and gives steps for troubleshooting based on the report output. We welcome your feedback for improvement of the diagnostic. After visiting the note, click on the +/- icon in the note and provide us with your valuable comments!

    Read the article

  • Showing "Failed" for a SharePoint 2010 Timer Job Status

    - by Damon
    I have been working with a bunch of custom timer jobs for the last month. Basically, I'm processing a bunch of SharePoint items from the timer job, and since I don't want the job failing because of an error on one item, I'm handling errors on an item-by-item basis and just continuing on with the next item. The net result of this, I soon found, is that my timer job actually says it ran successfully even if every single item fails. So I figured I would just set the "Failed" status on the timer job if anything went wrong, so an administrator could see that not all was well. However, I quickly found that there is no way to set a timer job status. If you want the status to show up as "Failed" then the only way to do it is to throw an exception. In my case, I just used a flag to store whether or not an error had occurred, and if so the timer job throws an exception just before exiting to let the status display correctly.
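
    The pattern, stripped of the SharePoint specifics (SharePoint's API is C#; this Java sketch just shows the error-flag idea): process every item, collect failures, and throw once at the end so the scheduler records the run as "Failed":

        import java.util.ArrayList;
        import java.util.List;

        public class ItemBatchJob {
            // A single bad item must not abort the batch, but the run as a
            // whole should still be reported as failed.
            static void run(List<String> items) {
                List<String> failures = new ArrayList<>();
                for (String item : items) {
                    try {
                        process(item);
                    } catch (Exception e) {
                        failures.add(item + ": " + e.getMessage()); // log and continue
                    }
                }
                if (!failures.isEmpty()) {
                    // Thrown after all items ran, so the scheduler shows "Failed".
                    throw new RuntimeException(failures.size() + " item(s) failed: " + failures);
                }
            }

            static void process(String item) { /* item-level work goes here */ }
        }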

    Read the article

  • How do I create a game that runs on Windows, iOS and Android?

    - by AspaApps
    I use C++ to create Windows games, and now I want to step into other OSes like Android or iOS. I'm totally familiar with C++, so I tried creating an app for iOS using Objective-C, and it worked great. However, I also want to publish games for Android, but not by using Java. I don't want to create a single game 5-6 times over for the different platforms. Is there any way that if I create a game for Windows, it will also work on Android and iOS? Or should I use ActionScript 3.0? If I use ActionScript 3.0, will it require Flash Player to run the game on Windows, Android, and iOS?

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations; other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. When you store three identical blocks on such a device and it dedups at the block level internally, it may reduce your redundant metadata to a single block on the non-volatile storage. When that block is corrupted, you have essentially three corrupted copies. Three hits with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface knows nothing about the importance of a block; to its inner mechanism a metadata block is no different from a normal data block, because there is no way to tell it that this one is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you are using a device with block-level deduplication. The point is that on most implementations you have to activate it explicitly, whereas certain devices do it by default or by design and you don't know about it. Given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody else, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to store multiple copies of important data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device dedups internally, it may remove your redundancy before it hits the non-volatile storage. You've gained nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. Note that you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself, and in the specifically mentioned case of SSDs that isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools: rotating rust is used for the pool and the SSDs serve as L2ARC/sZIL. And there it simply doesn't matter. When you really have to resort to the sZIL (because your system went down), it doesn't matter whether one block or several blocks are corrupt: you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, and in hybrid storage pool implementations that is the already-mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and where you use LUNs for your pool; but as mentioned before, on those devices enabling it is a user-made decision, so it's less probable that you are deduplicating your redundancies. Other filesystems lacking a capability similar to hybrid storage pools are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device.

    At the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancies, dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.

    Read the article

  • Problem when texturing triangles using glVertexPointer()

    - by tigrou
    I'm having a problem displaying a single quad. Here is how I do it:

        float tex_coord[] = { 0.0, 0.0, 0.0, 1.0, 1.0, 1.0,
                              1.0, 1.0, 1.0, 0.0, 0.0, 0.0 }; // how many coords should I give?
        int indices[] = { 3, 2, 0, 0, 1, 3 };
        float vertexes[] = { -37, 0, 30,
                             -38, 0, 29,
                             -41, 0, 32,
                             -42, 0, 31 };
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertexes);
        glTexCoordPointer(2, GL_FLOAT, 0, tex_coord);
        glDrawElements(GL_TRIANGLES, 2, GL_UNSIGNED_INT, indices);
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    The result: (with 2 triangles) (with 4 triangles)

    Read the article

  • Ubuntu 11.10 shut down stuck

    - by Jack Mayerz
    When I shut down, it always gets stuck on the shutdown screen where it displays the Ubuntu logo and the little dots. I tried to shut it down through the shell; I checked the init process, the shell, and everything, and I can't find out where the problem is! I tried to shut down through a terminal session and still get the same problem. It's really annoying, and I have to shut down with the power button every single time. Has anyone got a solution?

    Read the article

  • Fixing a bug while working on a different part of the code base

    - by imgx64
    This has happened at least once to me: I'm working on some part of the code base and find a small bug in a different part, and the bug stops me from completing what I'm currently trying to do. Fixing the bug could be as simple as changing a single statement. What do you do in that situation?

    1. Fix the bug and commit it together with your current work.
    2. Save your current work elsewhere, fix the bug in a separate commit, then continue your work. [1]
    3. Continue what you're supposed to do, commit the code (even if it breaks the build or fails some tests), then fix the bug (and make the build or tests pass) in a separate commit. [1]

    [1] In practice, this would mean: clone the original repository elsewhere, fix the bug, commit/push the changes, pull the commit into the repository you're working on, merge the changes, and continue your work. Edit: I changed number three to reflect what I really meant.

    Read the article

  • Getting Started Quickly

    - by Owen Allen
    If you're interested in using Ops Center, you'll want to get up and running as quickly and effectively as possible. One way to do this would be to work your way through the documentation library - use the Linux or Oracle Solaris install guides, then go through the Feature Guide and Admin Guide to start using the software. They're thorough, but they're a lot of reading. But if you're looking to install a simple deployment quickly, and you don't want to do all of the configuration work right off the bat, you can use the Quick Start Guide. It's a streamlined procedure that runs you through installing a single Enterprise Controller and co-located Proxy Controller, and then shows you how to discover assets quickly. Once you've discovered these assets, it describes how to use the analytics feature to view their performance, and use monitoring to keep track of their statuses and health. You'll have to do some additional configuration to use features like OS provisioning, OS updates, and virtualization, but the Quick Start guide gives you an overview of how to install and start using features quickly.

    Read the article

< Previous Page | 231 232 233 234 235 236 237 238 239 240 241 242  | Next Page >