Search Results

Search found 64711 results on 2589 pages for 'core data'.

  • Architecture design with MyBatis mappers

    - by Wolf
    I am creating a REST web service for providing data. I am using Spring MVC to handle the REST requests and MyBatis for data access. The application should be designed so that it is easy to change the data access implementation (for example to Hibernate or something else), and it has to be fast (so I am trying to avoid unnecessary over-complication of the design). Now my question is about the general design of the layers. I would normally define a DAO interface and then different implementations for different data access strategies, but MyBatis itself uses interfaces (mappers) to access the data. So I can think of two possible models, but I am not sure which one is better or whether there is some other, nicer way: (1) The controller layer uses service layer interfaces; the services are then implemented once per data access strategy. For example, for MyBatis, the service implementation uses the mapper classes to access the data, does whatever it needs to do with it, and sends it to the controller layer. (2) The controller layer uses the service layer, and the service layer uses DAO interfaces; the DAOs are then implemented once per data access strategy. For example, for MyBatis, the DAO class uses a mapper interface to access the data and sends it to the service layer; the service layer then does whatever it needs to do with it and sends it to the controller layer. I prefer the first strategy as it seems less complicated, but then I would have to rewrite all of the service code for another data access technology. What do you think? Thank you.
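
    A minimal sketch of the second layout (DAO interface plus a MyBatis-backed implementation behind an unchanged service layer). The User/UserDao/UserMapper names are hypothetical, not from the original question, and the annotation-based mapper is just one way to wire MyBatis:

        import org.apache.ibatis.annotations.Select;
        import org.springframework.stereotype.Repository;
        import org.springframework.stereotype.Service;

        // Persistence-agnostic contract used by the service layer.
        public interface UserDao {
            User findById(long id);
        }

        // MyBatis mapper interface: the persistence-specific piece.
        public interface UserMapper {
            @Select("SELECT id, name FROM users WHERE id = #{id}")
            User selectById(long id);
        }

        // Thin DAO implementation that adapts the mapper to the DAO contract.
        @Repository
        public class MyBatisUserDao implements UserDao {
            private final UserMapper mapper;

            public MyBatisUserDao(UserMapper mapper) {
                this.mapper = mapper;
            }

            @Override
            public User findById(long id) {
                return mapper.selectById(id);
            }
        }

        // The service depends only on the DAO interface, so swapping MyBatis
        // for Hibernate means writing a new DAO, not rewriting the services.
        @Service
        public class UserService {
            private final UserDao userDao;

            public UserService(UserDao userDao) {
                this.userDao = userDao;
            }

            public User getUser(long id) {
                return userDao.findById(id);
            }
        }

    With this split the business logic in the services is written once; only the thin DAO adapters vary per persistence technology, which is the trade-off the question is weighing against option (1).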

    Read the article

  • Is it better to hard code data or find an algorithm?

    - by OghmaOsiris
    I've been working on a board game that has a hex grid as the board (the upper right grid in the image below). Since the board will never change and the spaces on the board will always be linked to the same spaces around them, should I just hard code every space with the values that I need? Or should I use algorithms to calculate links and traversals? To be more specific, my board game is a 4 player game where each player has a 5x5x5x5x5x5 hex grid (again, the upper right grid in the image above). The object is to get from the bottom of the grid to the top, with various obstacles in the way, and each player is able to attack other players from the edge of their grid based on a range multiplier. Since a player's grid will never change and the distance of any arbitrary space from the edge of the grid will always be the same, should I just hard code this number into each of the spaces, or should I still use a breadth-first search algorithm when players are attacking? The only con I can think of for hard coding everything is that I'd have to code 9 + 2(5+6+7+8) = 61 individual cells. Is there anything else I'm missing that would make the more complex algorithms worth using?
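
    If the grid really is static, one middle ground is to run a breadth-first search once at load time and cache the resulting distances, rather than hand-coding 61 values or re-running BFS on every attack. A rough sketch, assuming a hypothetical Cell class with a Neighbors adjacency list (names and structure are illustrative, not from the original post):

        using System.Collections.Generic;

        public class Cell
        {
            // Adjacency list built once when the board is created.
            public readonly List<Cell> Neighbors = new List<Cell>();

            // Cached distance from the edge, filled in by the BFS below.
            public int DistanceFromEdge = int.MaxValue;
        }

        public static class BoardDistances
        {
            // Multi-source BFS: seed the queue with every edge cell at distance 0,
            // then expand outward. Runs once at load time; lookups are O(1) afterwards.
            public static void Precompute(IEnumerable<Cell> edgeCells)
            {
                var queue = new Queue<Cell>();
                foreach (var edge in edgeCells)
                {
                    edge.DistanceFromEdge = 0;
                    queue.Enqueue(edge);
                }

                while (queue.Count > 0)
                {
                    var current = queue.Dequeue();
                    foreach (var next in current.Neighbors)
                    {
                        if (next.DistanceFromEdge > current.DistanceFromEdge + 1)
                        {
                            next.DistanceFromEdge = current.DistanceFromEdge + 1;
                            queue.Enqueue(next);
                        }
                    }
                }
            }
        }

    This keeps the data-driven flexibility of an algorithm (the board definition can still change during development) while giving the same constant-time lookups as hard-coded values at play time.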

    Read the article

  • SQL SERVER – Concurrency Problems and their Relationship with Isolation Level

    - by pinaldave
    Concurrency is, simply put, the capability of the machine to support two or more transactions working with the same data at the same time. It usually becomes an issue when data is being modified; during retrieval of data it is not a problem. Most concurrency problems can be avoided by SQL locks. There are four types of concurrency problems visible in normal programming.
    1) Lost Update – This problem occurs when two transactions are involved and both are unaware of each other. The transaction which occurs later overwrites the changes made by the earlier one.
    2) Dirty Reads – This problem occurs when a transaction selects data that has not been committed by another transaction, and so may read data that never ends up existing once both transactions are over. Example: Transaction 1 changes the row. Transaction 2 reads the changed row. Transaction 1 rolls back the change. Transaction 2 has now selected a row that does not exist.
    3) Nonrepeatable Reads – This problem occurs when two SELECT statements of the same data return different values because another transaction has updated the data between the two SELECT statements. Example: Transaction 1 selects a row, which is later updated by Transaction 2. When Transaction 1 selects the row again, it gets a different value.
    4) Phantom Reads – This problem occurs when an UPDATE/DELETE is happening on one set of data while an INSERT/UPDATE is happening on the same set of data, leading to inconsistent data in the earlier transaction when both transactions are over. Example: Transaction 1 is deleting 10 rows that are marked for deletion; at the same time Transaction 2 inserts a row marked as deleted. When Transaction 1 is done deleting its rows, there is still a row left that is marked to be deleted.
    When two or more transactions are updating the data, concurrency is the biggest issue. I commonly see people toying around with isolation levels or locking hints (e.g. NOLOCK), which can very well compromise your data integrity and lead to much larger issues in the future. Here is the quick mapping of isolation levels to the concurrency problems they allow:
    Isolation Level   | Dirty Reads | Lost Update | Nonrepeatable Reads | Phantom Reads
    Read Uncommitted  | Yes         | Yes         | Yes                 | Yes
    Read Committed    | No          | Yes         | Yes                 | Yes
    Repeatable Read   | No          | No          | No                  | Yes
    Snapshot          | No          | No          | No                  | No
    Serializable      | No          | No          | No                  | No
    I hope this small 400-word article gives a quick understanding of concurrency issues and their relation to isolation levels. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
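
    For reference, the isolation level is set per session or per transaction in T-SQL. The snippet below is a minimal illustration only; the table and column names are made up:

        -- Prevent nonrepeatable reads (but not phantoms) for this transaction.
        SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

        BEGIN TRANSACTION;

            -- Both reads see the same value even if another session tries to
            -- update the row, because shared locks are held until the end
            -- of the transaction under REPEATABLE READ.
            SELECT Balance FROM dbo.Accounts WHERE AccountID = 42;
            SELECT Balance FROM dbo.Accounts WHERE AccountID = 42;

        COMMIT TRANSACTION;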

    Read the article

  • I need a little help with .htaccess rewrite

    - by Pinokyo
    I need a little help with my .htaccess file. I have song, singer and album links I want to rewrite. I already rewrote the links and they are like this: the links for songs are /song/song_name, for singers /singer_name, and for albums /album_name. From my .htaccess file:
        RewriteEngine on
        RewriteRule ^singer/([^/\.]+)/?$ /core/controller.php?singer=$1 [L]
        RewriteRule ^song/([^/\.]+)/?$ /core/controller.php?song=$1 [L]
        RewriteRule ^album/([^/\.]+)/?$ /core/controller.php?album=$1 [L]
    I need the links for songs, singers and albums to be like this: for songs /singer_name/song_name, for singers /singer_name, and for albums /singer_name/album_name. Can anyone help me with this, please?
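
    One hedged sketch of what such rules might look like. Note the design catch: with the desired URLs, songs and albums have the same shape (/singer_name/something), so the rewrite rules alone cannot tell them apart; the controller has to look the second segment up. The `title` parameter name below is made up for illustration:

        RewriteEngine on

        # Don't rewrite requests for real files or directories (e.g. /core/...).
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # /singer_name  ->  controller.php?singer=...
        RewriteRule ^([^/\.]+)/?$ /core/controller.php?singer=$1 [L,QSA]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # /singer_name/name  ->  controller.php?singer=...&title=...
        # The controller must decide whether "title" is a song or an album,
        # because the URL shape alone no longer distinguishes the two.
        RewriteRule ^([^/\.]+)/([^/\.]+)/?$ /core/controller.php?singer=$1&title=$2 [L,QSA]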

    Read the article

  • How to handle lookup data in a C# ASP.Net MVC4 application?

    - by Jim
    I am writing an MVC4 application to track documents we have on file for our clients. I'm using code first, and have created models for my objects (Company, Document, etc...). I am now faced with the topic of document expiration. Business logic dictates certain documents will expire a set number of days past the document date. For example, Document A might expire in 180 days, Document B in 365 days, etc... I have a class for my documents as shown below (simplified for this example). What is the best way for me to create a lookup for expiration values? I want to specify that documents of type DocumentA expire in 30 days, type DocumentB in 75 days, etc... I can think of a few ways to do this: a lookup table in the database I can query; a new property in my class (DaysValidFor) with a custom getter that returns different values based on the DocumentType; a method that takes in the document type and returns the number of days; and I'm sure there are other ways I'm not even thinking of. My main concern is a) not violating any best practices and b) maintainability. Are there any pros/cons I need to be aware of for the above options, or is this a case of "just pick one and run with it"? One last thought: right now the number of days does not need to be stored anywhere on a per-document basis -- however, it is possible that business logic will change this (i.e., DocumentAs expire in 30 days by default, but this DocumentA associated with Company XYZ will be 60 days because we like them). In that case, is a property in the Document class the best way to go, seeing as I would need to add that field to the DB?
        namespace Models
        {
            // Types of documents to track
            public enum DocumentType
            {
                DocumentA,
                DocumentB,
                DocumentC
                // etc...
            }

            // Document model
            public class Document
            {
                public int DocumentID { get; set; }

                // Foreign key to companies
                public int CompanyID { get; set; }

                public DocumentType DocumentType { get; set; }

                // Helper to translate enum's value to an integer for DB storage
                [Column("DocumentType")]
                public int DocumentTypeInt
                {
                    get { return (int)this.DocumentType; }
                    set { this.DocumentType = (DocumentType)value; }
                }

                [DataType(DataType.Date)]
                [DisplayFormat(DataFormatString = "{0:MM-dd-yyyy}", ApplyFormatInEditMode = true)]
                public DateTime DocumentDate { get; set; }

                // Navigation properties
                public virtual Company Company { get; set; }
            }
        }
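
    A minimal sketch combining two of the ideas above: a static lookup for the per-type defaults (which could later be replaced by a database table without changing callers) plus an optional per-document override for the Company XYZ case. It assumes a hypothetical nullable column, public int? DaysValidOverride { get; set; }, is added to Document; that property and the class names are illustrative, not from the original post:

        using System;
        using System.Collections.Generic;

        public static class DocumentExpiration
        {
            // Default expiration per document type, kept in one place.
            private static readonly Dictionary<DocumentType, int> DefaultDays =
                new Dictionary<DocumentType, int>
                {
                    { DocumentType.DocumentA, 30 },
                    { DocumentType.DocumentB, 75 },
                    { DocumentType.DocumentC, 180 },
                };

            public static int GetDaysValid(Document doc)
            {
                // A per-document override (hypothetical nullable column) wins
                // over the type default.
                return doc.DaysValidOverride ?? DefaultDays[doc.DocumentType];
            }

            public static DateTime GetExpirationDate(Document doc)
            {
                return doc.DocumentDate.AddDays(GetDaysValid(doc));
            }
        }

    Swapping the dictionary for a database lookup table later only changes the inside of GetDaysValid, which addresses the maintainability concern.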

    Read the article

  • Resolving harmless binding errors in WPF II : 2 approaches for removing data binding errors due to heterogeneous types in a hierarchical view

    - by akjoshi
    This is a continuation post to my previous post Resolving harmless binding errors in WPF in which I talked about various ways of  resolving different binding errors etc. I recently came across another situation in which we get these binding errors and how they can be resolved. Problem: If you have a tree with 2 types of items in it and you use different DataTypes for each of them, then you will get binding errors because of missing Properties in either one of the item. In our case we had binding...(read more)

    Read the article

  • Migrating BizTalk 2006 R2 to BizTalk 2010 XLANGs Issue

    - by SURESH GIRIRAJAN
    When we migrated some BizTalk apps from BizTalk 2006 R2 to BizTalk 2010, we ran into an issue when a .NET component is called inside an orchestration. In the .NET component we are trying to retrieve a promoted property, and we checked in the BizTalk group hub to validate that it was promoted - no issues there. Only when we tried to access the data in the .NET component did we have a problem. We had simply moved all the assemblies we had in BizTalk 2006 R2 to BizTalk 2010 and didn't recompile anything in the BizTalk 2010 environment. Looking further, a couple of new namespaces were added under Microsoft.XLANGs… in BizTalk 2010 compared to BizTalk 2006 R2, which caused the issue. So all we did to fix it was recompile the project in the 2010 environment and it worked fine. It looks like a backward compatibility issue.
        public static void Load(XLANGMessage msg)
        {
            try
            {
                // get the process id from context.
                object ctxVal = msg.GetPropertyValue(typeof(ProcessID));
                …
        }
    BizTalk 2010: error message in the event viewer:
        The service instance will remain suspended until administratively resumed or terminated. If resumed the instance will continue from its last persisted state and may re-throw the same unexpected exception.
        InstanceId: 441d73d3-2e84-49d2-b6bd-7218065b5e1d
        Shape name: Bulk Load
        ShapeId: bb959e56-9221-48be-a80f-24051196617d
        Exception thrown from: segment 1, progress 65
        Inner exception: A property cannot be associated with the type 'Tellago.Common.Schemas.ProcessId'.
        Exception type: InvalidPropertyTypeException
        Source: Microsoft.XLANGs.Engine
        Target Site: Microsoft.XLANGs.RuntimeTypes.MessagePropertyDefinition _getMessagePropertyDefinition(System.Type)
        The following is a stack trace that identifies the location where the exception occurred:
        at Microsoft.XLANGs.Core.XMessage._getMessagePropertyDefinition(Type propType)
        at Microsoft.XLANGs.Core.XMessage.GetContentProperty(Type propType)
        at Microsoft.XLANGs.Core.XMessage.GetPropertyValue(Type propType)
        at Microsoft.BizTalk.XLANGs.BTXEngine.BTXMessage.GetPropertyValue(Type propType)
        at Microsoft.XLANGs.Core.MessageWrapperForUserCode.GetPropertyValue(Type propType)
        at Tellago.Common.Components.Load(XLANGMessage msg)
        at Tellago.SuspensionProcess.segment1(StopConditions stopOn)
        at Microsoft.XLANGs.Core.SegmentScheduler.RunASegment(Segment s, StopConditions stopCond, Exception& exp)

    Read the article

  • 10 tape technology features that make you go hmm.

    - by Karoly Vegh
    A week ago an Oracle/StorageTek Tape Specialist, Christian Vanden Balck, visited Vienna, and agreed to visit customers to do techtalks and update them about the technology boom going around tape. I had the privilege to attend some of his sessions and noted the information and features that took the customers by surprise and made them think. Allow me to share the top 10: I. StorageTek as a brand: StorageTek is one of he strongest names in the Tape field. The brand itself was valued so much by customers that even after Sun Microsystems acquiring StorageTek and the Oracle acquiring Sun the brand lives on with all the Oracle tapelibraries are officially branded StorageTek.See http://www.oracle.com/us/products/servers-storage/storage/tape-storage/overview/index.html II. Disk information density limitations: Disk technology struggles with information density. You haven't seen the disk sizes exploding lately, have you? That's partly because there are physical limits on a disk platter. The size is given, the number of platters is limited, they just can't grow, and are running out of physical area to write to. Now, in a T10000C tape cartridge we have over 1000m long tape. There you go, you have got your physical space and don't need to stuff all that data crammed together. You can write in a reliable pattern, and have space to grow too. III. Oracle has a market share of 62% worldwide in recording head manufacturing. That's right. If you are running LTO drives, with a good chance you rely on StorageTek production. That's two out of three LTO recording heads produced worldwide.  IV. You can store 1 Exabyte data in a single tape library. Yes, an Exabyte. That is 1000 Petabytes. Or, a million Terabytes. A thousand million GigaBytes. You can store that in a stacked StorageTek SL8500 tapelibrary. In one SL8500 you can put 10.000 T10000C cartridges, that store 10TB data (compressed). You can stack 10 of these SL8500s together. Boom. 1000.000 TB.(n.b.: stacking means interconnecting the libraries. Yes, cartridges are moved between the stacked libraries automatically.)  V. EMC: 'Tape doesn't suck after all. We moved on.': Do you remember the infamous 'Tape sucks, move on' Datadomain slogan? Of course they had to put it that way, having only had disk products. But here's a fun fact: on the EMCWorld 2012 there was a major presence of a Tape-tech company - EMC, in a sudden burst of sanity is embracing tape again. VI. The miraculous T10000C: Oracle StorageTek has developed an enterprise-grade tapedrive and cartridge, the T10000C. With awesome numbers: The Cartridge: Native 5TB capacity, 10TB with compression Over a kilometer long tape within the cartridge. And it's locked when unmounted, no rattling of your data.  Replaced the metalparticles datalayer with BaFe (bariumferrite) - metalparticles lose around 7% of magnetism within 30 days. BaFe does not. Yes we employ solid-state physicists doing R&D on demagnetisation in our labs. Can be partitioned, storage tiering within the cartridge!  The Drive: 2GB Cache Encryption implemented in HW - no performance hit 252 MB/s native sustained data rate, beats disk technology by far. Not to mention peak throughput.  Leading the tape while never touching the data side of it, protecting your data physically too Data integritiy checking (CRC recalculation) on tape within the drive without having to read it back to the server reordering data from tape-order, delivering it back in application-order  writing 32 tracks at once, reading them back for CRC check at once VII. 
You only use 20% of your data on a regular basis. The rest 80% is just lying around for years. On continuously spinning disks. Doubly consuming energy (power+cooling), blocking diskstorage capacity. There is a solution called SAM (Storage Archive Manager) that provides you a filesystem unifying disk and tape, moving data on-demand and for clients transparently between the different storage tiers. You can share these filesystems with NFS or CIFS for clients, and enjoy the low TCO of tape. Tapes don't spin. They sit quietly in their slots, storing 10TB data, using no energy, producing no heat, automounted when a client accesses their data.See: http://www.oracle.com/us/products/servers-storage/storage/storage-software/storage-archive-manager/overview/index.html VIII. HW supported for three decades: Did you know that the original PowderHorn library was released in '87 and has been only discontinued in 2010? That is over two decades of supported operation. Tape libraries are - just like the data carrying on tapecartridges - built for longevity. Oh, and the T10000C cartridge has 30-year archival life for long-term retention.  IX. Tape is easy to manage: Have you heard of Tape Storage Analytics? It is a central graphical tool to summarize, monitor, analyze dataflow, health and performance of drives and libraries, see: http://www.oracle.com/us/products/servers-storage/storage/tape-storage/tape-analytics/overview/index.html X. The next generation: The T10000B drives were able to reuse the T10000A cartridges and write on them even more data. On the same cartridges. We call this investment protection, and this is very important for Oracle for the future too. We usually support two generations of cartridges together. The current drive is a T10000C. (...I know I promised to enlist 10, but I got still two more I really want to mention. Allow me to work around the problem: ) X++. The TallBots, the robots moving around the cartridges in the StorageTek library from tapeslots to the drives are cableless. Cables, belts, chains running to moving parts in a library cause maintenance downtimes. So StorageTek eliminated them. The TallBots get power, commands, even firmwareupgrades through the rails they are running on. Also, the TallBots don't just hook'n'pull the tapes out of their slots, they actually grip'n'lift them out. No friction, no scratches, no zillion little plastic particles floating around in the library, in the drives, on your data. (X++)++: Tape beats SSDs and Disks. In terms of throughput (252 MB/s), in terms of TCO: disks cause around 290x more power and cooling, in terms of capacity: 10TB on a single media and soon more.  So... do you need to store large amounts of data? Are you legally bound to archive it for dozens of years? Would you benefit from automatic storage tiering? Have you got large mediachunks to be streamed at times? Have you got power and cooling issues in the growing datacenters? Do you find EMC's 180° turn of tape attitude interesting, but appreciate it at the same time? With all that, you aren't alone. The most data on this planet is stored on tape. Tape is coming. Big time.

    Read the article

  • XNA 4: GetData from Texture2D and Set it into Texture3D with specific order

    - by cubrman
    I am trying to convert my color grading 2D lookup texture into a 3D LUT. When I simply use:
        ColorAtlas.GetData(data);
        ColorAtlas3D.SetData(data);
    I get this: I tried building my 2D atlas horizontally but it did not help - the data was messed up in a different way. So my question is: how can I influence the order of the data I get from the 2D atlas, and how can I properly pass it into my 3D atlas? Update: I know that I can GetData from a specific rectangular area and put it into several arrays, but the result is still the same. This is what I tried:
        Color[] data2D = new Color[0];
        for (int i = 0; i < 32; i++)
        {
            Color[] data = new Color[32 * 32];
            GraphicsDevice.SetRenderTarget(null);
            ColorAtlas.GetData(0, new Rectangle(0, i*32, 32, 32), data, 0, data.Length);

            int oldLength = data2D.Length;
            Array.Resize<Color>(ref data2D, oldLength + data.Length);
            Array.Copy(data, 0, data2D, oldLength, data.Length);
        }
        ColorAtlas3D.SetData(data2D);
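
    For what it's worth, a hedged sketch of an explicit remap. It assumes a horizontal atlas of 32 slices of 32x32 (1024x32 total), that Texture2D.GetData returns texels row-major across the whole atlas, and that Texture3D.SetData expects slice-major data (z, then y, then x) - all of which are assumptions, not facts from the original post. If the atlas is stacked vertically, the linear order from GetData should already match the 3D layout, so if that still looks wrong the problem may lie elsewhere (for example in the slice order of the atlas itself):

        const int size = 32;                          // LUT dimension (assumption)
        Color[] atlas = new Color[size * size * size];
        ColorAtlas.GetData(atlas);                    // 1024 x 32 atlas, row-major

        Color[] volume = new Color[size * size * size];
        for (int z = 0; z < size; z++)
            for (int y = 0; y < size; y++)
                for (int x = 0; x < size; x++)
                {
                    // Texel of slice z at (x, y) sits at column z*size + x of row y.
                    int src = y * (size * size) + z * size + x;  // index in the 2D atlas
                    int dst = z * (size * size) + y * size + x;  // index in the 3D texture
                    volume[dst] = atlas[src];
                }

        ColorAtlas3D.SetData(volume);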

    Read the article

  • Developing a mobile application, how to show help if it contains too much data?

    - by MobileDev123
    I am developing a mobile application which has a lot of functionality, and I am pretty sure the design will confuse users about how to use some of it, so we decided to include some help, like the help we regularly see in desktop applications. Later we found that the help text is too long - we don't think one screen is enough to describe what a user can do. Moreover, the project itself is expected to evolve based on the beta stage and user reports. After a lot of thinking and meetings we have come up with three options for showing users what they can do: (1) Create a website or blog to let users know what they can do with the application; the advantage is that it can also be a good source of marketing, but users would have to access the site, while most of the application can be used offline in the earlier versions. (2) Create a section in the application called demos to show the same thing locally, but we are afraid that it will increase the size of the app, which we think can be avoided (and we are planning to avoid it if there is any option). (3) Show popups, but we discarded this, thinking that popups annoy users no matter what the platform is. I want to know from the community which option you would choose; we are also open to other ideas if you have them.

    Read the article

  • benchmark tcp: ab or iperf like tool to send hex/binary/pcap data?

    - by olan
    Hello all, I have written a server in Twisted for a current project I'm working on, and now I need to test it. It receives TCP packets, with the payload consisting of just a serialised binary string. I want to be able to test the server for concurrency/throughput using the binary data as the payload, but can not find any tool that will allow me to do this. I tried iperf -F but it didn't work, as I think it was sending the binary/hex data as chars. I've also looked at ab which seems to be perfect - if only for http. As well as these, I've had a look at tcpreplay, but it doesn't perform any testing (or establish TCP connections) so it's not much use. Any help would be greatly appreciated as I'm rather stuck on this one!

    Read the article

  • Hello, can you just send me all your data please?

    - by fatherjack
    LiveJournal Tags: Security,SQL Server Our house phone rang on Saturday night and Mrs Fatherjack answered. I was in the other room but I heard her trying to explain to the caller that they were in some way mistaken. Eventually, as she got more irate with the caller, I went out and started to catch up with the events so far. The caller was trying to convince my wife that our computer was infected with a virus. She was confident that it wasn't. Her patience expired after almost 10 minutes...(read more)

    Read the article

  • Cannot get Virtualbox to install properly on Ubuntu 12.04

    - by lopac1029
    I cannot get VirtualBox to install properly on my 12.04. I first went with a manual install of the .deb from the old builds section of the VirtualBox page. That .deb opened up the Software Center and installed. Then I got this error:
        VT-x/AMD-V hardware acceleration is not available on your system. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot.
    which I can only assume was due to my Ubuntu version being 32-bit (System Details - Overview - OS type: 32-bit, right?). So I followed these instructions to remove the .deb manually, restarted my laptop, and then FOUND the actual VirtualBox install in the Software Center and installed from that (assuming it would give me the correct version I need for my system). So after all that (and then some), I'm still getting the same error when I connect to my new job's project in VirtualBox. Can anyone point me in the right direction of what to do here? This is the first time I've ever worked with VirtualBox, and no one at this company is using Ubuntu, so I'm on my own here. EDIT: Here is the direct info from running the 2 suggested commands:
        Inspiron-1750-brick:~ $ lscpu
        Architecture:          i686
        CPU op-mode(s):        32-bit, 64-bit
        Byte Order:            Little Endian
        CPU(s):                2
        On-line CPU(s) list:   0,1
        Thread(s) per core:    1
        Core(s) per socket:    2
        Socket(s):             1
        Vendor ID:             GenuineIntel
        CPU family:            6
        Model:                 23
        Stepping:              10
        CPU MHz:               2100.000
        BogoMIPS:              4189.45
        L1d cache:             32K
        L1i cache:             32K
        L2 cache:              2048K

        Inspiron-1750-brick:~ $ cat /proc/cpuinfo
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Core(TM)2 Duo CPU T6500 @ 2.10GHz
        stepping        : 10
        microcode       : 0xa07
        cpu MHz         : 1200.000
        cache size      : 2048 KB
        physical id     : 0
        siblings        : 2
        core id         : 0
        cpu cores       : 2
        apicid          : 0
        initial apicid  : 0
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dtherm
        bogomips        : 4189.80
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

        processor       : 1
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Core(TM)2 Duo CPU T6500 @ 2.10GHz
        stepping        : 10
        microcode       : 0xa07
        cpu MHz         : 1200.000
        cache size      : 2048 KB
        physical id     : 0
        siblings        : 2
        core id         : 1
        cpu cores       : 2
        apicid          : 1
        initial apicid  : 1
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dtherm
        bogomips        : 4189.45
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    Read the article

  • What to call objects that may delete cached data to meet memory constraints?

    - by Brent
    I'm developing some cross-platform software which is intended to run on mobile devices. Both iOS and Android provide low memory warnings. I plan to make a wrapper class that will free cached resources (like textures) when low memory warnings are issued (assuming the resource is not in use). If the resource comes back into use, the wrapper will re-cache it, etc... I'm trying to think of what this is called. In .NET it's similar to a "weak reference", but that only really makes sense when dealing with garbage collection, and since I'm using C++ and shared_ptr, a weak reference already has a meaning which is distinct from the one I'm thinking of. There's also the difference that this class will be able to rebuild the cache when needed. What is this pattern (or whatever it is) called? Edit: Feel free to recommend tags for this question.
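
    Whatever the best name turns out to be (terms like "purgeable" or "discardable" resource come up in this context), here is a rough C++ sketch of the wrapper being described, assuming a caller-supplied loader that can rebuild the resource on demand; all names are illustrative, not an established library API:

        #include <functional>
        #include <memory>

        // A cache slot that can drop its payload under memory pressure and
        // lazily rebuild it from a loader function the next time it is used.
        template <typename T>
        class ReloadableResource {
        public:
            explicit ReloadableResource(std::function<std::shared_ptr<T>()> loader)
                : loader_(std::move(loader)) {}

            // Returns the cached object, rebuilding it if it was purged.
            std::shared_ptr<T> get() {
                if (!cached_) {
                    cached_ = loader_();   // reload from disk / regenerate
                }
                return cached_;
            }

            // Called from the platform's low-memory callback. Only drops the
            // cache if nothing else is currently holding the resource.
            void purgeIfUnused() {
                if (cached_ && cached_.use_count() == 1) {
                    cached_.reset();
                }
            }

        private:
            std::function<std::shared_ptr<T>()> loader_;
            std::shared_ptr<T> cached_;
        };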

    Read the article

  • Ensuring that saved data has not been edited in a game with both offline and online components

    - by Omar Kooheji
    I'm in the pre-planning phase of coming up with a game design and I was wondering if there was a sensible way to stop people from editing saves in a game with offline and online components. The offline component would allow the player to play through the game and the online component would allow them to play against other players, so I would need to make sure that people hadn't edited the source code/save files while offline to gain an advantage while online. Game likely to be developed in either .Net or Java, both of which are unfortunately easy to decompile.
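
    One common mitigation (not a complete answer, since a determined user who controls the client can always dig the key out of a decompiled binary) is to sign the save file so casual edits are detected. A hedged Java sketch using an HMAC; the key handling here is deliberately naive and purely illustrative:

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        public final class SaveSigner {
            // In a real game the key would be obfuscated or issued by the server;
            // hard-coding it like this is only for illustration.
            private static final byte[] KEY =
                    "not-a-real-secret".getBytes(StandardCharsets.UTF_8);

            // Tag written alongside the save file.
            public static byte[] sign(byte[] saveData) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
                return mac.doFinal(saveData);
            }

            // Recompute the tag and compare; a mismatch means the save was edited.
            public static boolean isUntampered(byte[] saveData, byte[] storedTag) throws Exception {
                return MessageDigest.isEqual(sign(saveData), storedTag);
            }
        }

    For the online component, the more robust approach is to treat the client as untrusted and have the server validate or recompute anything that affects other players.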

    Read the article

  • How to stop fan running always on Asus P8P67LE motherboard with ATI Radeon HD6900

    - by Chris Good
    I'm using Ubuntu 12.04 LTS. I'm not sure if it is the CPU (i7) fan or the video card fan. I've tried using lm-sensors & fancontrol:
        sudo sensors-detect
        Now follows a summary of the probes I have just done.
        Just press ENTER to continue:
        Driver `w83627ehf':
          * ISA bus, address 0x290
            Chip `Nuvoton NCT6776F Super IO Sensors' (confidence: 9)
        Driver `coretemp':
          * Chip `Intel digital thermal sensor' (confidence: 9)
        To load everything that is needed, add this to /etc/modules:
        # Chip drivers
        coretemp
        w83627ehf
    Like many people, I'm also getting this error:
        /usr/sbin/pwmconfig: There are no pwm-capable sensor modules installed
    Here is the output of sensors:
        # sensors
        radeon-pci-0100
        Adapter: PCI adapter
        temp1:         +71.0°C

        coretemp-isa-0000
        Adapter: ISA adapter
        Physical id 0:  +44.0°C  (high = +80.0°C, crit = +98.0°C)
        Core 0:         +44.0°C  (high = +80.0°C, crit = +98.0°C)
        Core 1:         +40.0°C  (high = +80.0°C, crit = +98.0°C)
        Core 2:         +43.0°C  (high = +80.0°C, crit = +98.0°C)
        Core 3:         +42.0°C  (high = +80.0°C, crit = +98.0°C)
    I'm hoping some-one has already solved this for my configuration because this seems to be a problem for many people and there are many different suggestions.

    Read the article

  • What lasts longer: Data stored on non-volatile flash RAM, optical media, or magnetic disk?

    - by Chris W. Rea
    What lasts longer: Data stored on non-volatile flash RAM (USB stick or SD cards?), optical media (CD, DVD, or Blu-Ray?), or magnetic disk (floppies, hard drives?) My gut tells me optical media, but I'm not sure. Furthermore, which of those digital media would be most suitable for long-term data storage where environmental issues are unknown, such as low/high temperature or humidity? For example, what digital media could be stored in a basement, attic, or time capsule, and be expected to survive a reasonably long time? e.g. a lifetime, and then some. Update: Looks like optical media and magnetic tape each have one vote below. Does anybody else have an opinion or know of a study comparing the two?

    Read the article

  • How can I get a data usage/access log for an external hard-drive?

    - by Vittorio Vittori
    Hello, I'm working in an office with many people and sometimes I leave my external hard drive, with my personal data on it, at my desk. I would like to know if there is some way to see whether my hard disk was used during my absence. I'm not the computer administrator, so I can't use exclusive file permissions, and I would really like to know if the hard disk was opened from another computer. I am using a Mac. Is there some other way to protect personal data on a USB device like a hard drive? If yes, can you post some links to possible guides? I hope there is some trick!!

    Read the article

  • Outlook 2010 + Move IMAP PST file = Outlook data file cannot be accessed.

    - by GWB
    I set up a new IMAP account in Outlook 2010. It works, but it creates the IMAP PST file in C:\Users\User\AppData\Local\Microsoft\Outlook. I want the file on my data drive in D:\Users\User\Documents\Outlook Files (the same folder where Outlook automatically creates the local Outlook PST). I followed the instructions here to move the IMAP PST. Testing the account (send/receive) works fine, but if I try to manually send an email I get error 0x8004010F "Outlook data file cannot be accessed". I've tried repairing the PST using SCANPST (it always finds errors), and deleting and recreating the account, but I get the same error. If I move the PST file back, it works again, but this is not ideal. Note: I don't think this is a duplicate of this question, as the cause is different and the solution does not help.

    Read the article

  • HP Envy 15 3040nr sound issues

    - by Lenny
    I have successfully set up my Envy 15" for Ubuntu 12.04 except for the sound, anyone have any ideas about how to get it working? I had to blacklist snd-usb-audio to my alsa to stop pulseaudio from freezing on login and logout, but now no sound plays. Could anyone provide me with some guidance? lspci info 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) 00:01.1 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) 00:01.2 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) 00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5) 00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5) 00:1c.7 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 8 (rev b5) 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Whistler [AMD Radeon HD 6600M Series] 09:00.0 Network controller: Intel Corporation Centrino Advanced-N 6230 (rev 34) 0a:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5116 PCI Express Card Reader (rev 01) 10:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0) 11:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)

    Read the article

  • Is there a way to see granular per-visit data in Google Analytics?

    - by jakub.g
    I've started using Google Analytics very recently and I'm a bit lost with the sea of options (have been using Sitemeter before for some time). I've clicked through the service a lot but couldn't find what I'm accustomed to. I can see multitude of aggregated statistics in GA like: charts of browser share lists of country share lists of most visited URLs within the page and so on, but I would actually like to analyze each of the visits themselves. Something like: User X, France, Chrome, 7 pageviews between 18:01 and 18:15, entered on a.htm and exited on b.htm User Y, UK, Firefox, 1 pageview at 18:20, entered on c.htm Is there an easy way to see the reports in this way (perhaps by clicking a link to a separate page to see that particular session's stats)? How to navigate there if so?

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. Afterall, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost.  Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain. Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart. Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart. Gold raises the game substantially for business critical applications that can’t accept vulnerability to single points-of-failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times. Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement. 
But they deliver substantial value for your most critical applications where downtime and data loss are not an option. The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection – the Not One Size Fits All title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial-in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another – whether it’s your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at: Oracle Maximum Availability Architecture - Webcast Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • How likely can my data be recovered after Windows CHKDSK performed on a degraded RAID 5 array?

    - by chrisling106
    Hello there, we have a RAID 5 setup with 3 SATA disks; #2 went down, as reported on the pre-POST screen. Unfortunately, for some reason out of my control, the system was rebooted with the degraded RAID :-O Windows XP (64-bit) loaded, CHKDSK ran automatically and did its recovery! From that point onwards, the following error prompt appears every time, even in Safe Mode: "lsass.exe - The endpoint format is invalid". I took the 3 disks to a data recovery expert and need to wait at least 2-4 days for results. There are 2 VMs, stored across multiple files, in this RAID 5 array, and there's no backup! Sorry, I just inherited the system from an ex-staff member who left the company 2 months before I joined. How likely is it that the data can be recovered?

    Read the article

  • ConsumeStructuredBuffer, what am I doing wrong?

    - by John
    I'm trying to implement the 3rd exercise in chapter 12 of Introduction to 3D Game Programming with DirectX 11, that is: Implement a Compute Shader to calculate the length of 64 vectors. Previous exercises ask you to do the same with typed buffers and regular structured buffers and I had no problems with them. For what I've read, [Consume|Append]StructuredBuffers are bound to the pipeline using UnorderedAccessViews (as long as they use the D3D11_BUFFER_UAV_FLAG_APPEND, and the buffers have both D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_UNORDERED_ACCESS bind flags). Problem is: my AppendStructuredBuffer works, since I can append data to it and retrieve it from the application to write to a results file, but the ConsumeStructuredBuffer always returns zeroed data. Data is in the buffer, since if I change the UAV to a ShaderResourceView and to a StructuredBuffer in the HLSL side it works. I don't know what I am missing: Should I initialize the ConsumeStructuredBuffer on the GPU, or can I do it when I create the buffer (as I amb currently doing). Is it OK to bind the buffer with a UAV as described above? Do I need to bind it as a ShaderResourceView somehow? Maybe I am missing some step? This is the declaration of buffers in the Compute Shader: struct Data { float3 v; }; struct Result { float l; }; ConsumeStructuredBuffer<Data> gInput; AppendStructuredBuffer<Result> gOutput; And here the creation of the buffer and UAV for input data: D3D11_BUFFER_DESC inputDesc; inputDesc.Usage = D3D11_USAGE_DEFAULT; inputDesc.ByteWidth = sizeof(Data) * mNumElements; inputDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS; inputDesc.CPUAccessFlags = 0; inputDesc.StructureByteStride = sizeof(Data); inputDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED; D3D11_SUBRESOURCE_DATA vinitData; vinitData.pSysMem = &data[0]; HR(md3dDevice->CreateBuffer(&inputDesc, &vinitData, &mInputBuffer)); D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc; uavDesc.Format = DXGI_FORMAT_UNKNOWN; uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER; uavDesc.Buffer.FirstElement = 0; uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND; uavDesc.Buffer.NumElements = mNumElements; md3dDevice->CreateUnorderedAccessView(mInputBuffer, &uavDesc, &mInputUAV); Initial data is an array of Data structs, which contain a XMFLOAT3 with random data. I bind the UAV to the shader using the Effects framework: ID3DX11EffectUnorderedAccessViewVariable* Input = mFX->GetVariableByName("gInput")->AsUnorderedAccessView(); Input->SetUnorderedAccessView(uav); // uav is mInputUAV Any ideas? Thank you.
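
    One thing worth checking (an educated guess, not a confirmed diagnosis): a ConsumeStructuredBuffer relies on the UAV's hidden counter, which must be initialized to the number of valid elements when the view is bound; if it is never set, consuming typically yields zeroed or garbage data even though the buffer contents are fine. The Effects framework's SetUnorderedAccessView call has no parameter for that initial count, so a common workaround is to bind the UAVs on the device context yourself. A sketch of that route, assuming the compute shader is set directly rather than through an effect pass, and using hypothetical names (mOutputUAV, computeShader, md3dImmediateContext) alongside the question's mInputUAV:

        // Bind both UAVs with explicit initial counts: the consume buffer starts
        // "full" (64 elements to consume), the append buffer starts empty.
        // Passing -1 instead would mean "keep the counter's current value".
        ID3D11UnorderedAccessView* uavs[2] = { mInputUAV, mOutputUAV };
        UINT initialCounts[2] = { 64, 0 };
        md3dImmediateContext->CSSetUnorderedAccessViews(0, 2, uavs, initialCounts);

        // Run the compute shader (register assignments in HLSL must match slots 0 and 1).
        md3dImmediateContext->CSSetShader(computeShader, nullptr, 0);
        md3dImmediateContext->Dispatch(1, 1, 1);

        // Unbind so the buffers can be read back or reused afterwards.
        ID3D11UnorderedAccessView* nullUAVs[2] = { nullptr, nullptr };
        md3dImmediateContext->CSSetUnorderedAccessViews(0, 2, nullUAVs, nullptr);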

    Read the article
