Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.


  • Change Drobo directory permissions

    - by Steven Scott
    I have a Drobo unit connected via FireWire to a Dell notebook running Ubuntu 12.04. I cannot seem to change the permissions to allow all users read/write access to the drives. The unit automatically mounts the volumes as the user currently using the system, so other applications cannot access the device. I want to set up a Plex Media Server to stream music, etc., but it will not scan the drives since it cannot access them. How can I change the permissions to allow everyone to read the volumes? If I add them to fstab as NTFS volumes, Ubuntu reports that they are not available during boot, likely because the FireWire drives have not been discovered yet. Any ideas or suggestions would be greatly appreciated.
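
    A minimal sketch of the fstab approach, not a verified fix: with ntfs-3g, ownership and permissions come from mount options rather than chmod, and the nofail option keeps a slow FireWire probe from failing the boot. The label Drobo01 and the mount point below are made-up placeholders for whatever the volumes are actually called.

        # /etc/fstab - identify the volume by label so FireWire probe order doesn't matter
        LABEL=Drobo01  /media/drobo  ntfs-3g  defaults,nofail,noatime,uid=1000,gid=1000,umask=022  0  0

        # once the drive has shown up, mount it by hand without rebooting
        sudo mkdir -p /media/drobo
        sudo mount /media/drobo

    With umask=022 every user can read the files; umask=000 would make them world-writable as well.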

    Read the article

  • 10 tape technology features that make you go hmm.

    - by Karoly Vegh
    A week ago an Oracle/StorageTek Tape Specialist, Christian Vanden Balck, visited Vienna and agreed to visit customers to give tech talks and update them about the technology boom going on around tape. I had the privilege of attending some of his sessions, and I noted the information and features that took the customers by surprise and made them think. Allow me to share the top 10:
    I. StorageTek as a brand: StorageTek is one of the strongest names in the tape field. The brand was valued so highly by customers that even after Sun Microsystems acquired StorageTek, and Oracle acquired Sun, the brand lives on: all Oracle tape libraries are officially branded StorageTek. See http://www.oracle.com/us/products/servers-storage/storage/tape-storage/overview/index.html
    II. Disk information density limitations: Disk technology struggles with information density. You haven't seen disk sizes exploding lately, have you? That's partly because there are physical limits on a disk platter. The size is given, the number of platters is limited, they just can't grow, and they are running out of physical area to write to. A T10000C tape cartridge, by contrast, contains over 1000 m of tape. There you have your physical space: the data doesn't need to be crammed together, you can write it in a reliable pattern, and you have room to grow too.
    III. Oracle has a 62% worldwide market share in recording head manufacturing. That's right: if you are running LTO drives, there is a good chance you rely on StorageTek production. That's two out of three LTO recording heads produced worldwide.
    IV. You can store 1 exabyte of data in a single tape library. Yes, an exabyte. That is 1,000 petabytes, or a million terabytes, or a thousand million gigabytes. You can store that in a stacked StorageTek SL8500 tape library: one SL8500 takes 10,000 T10000C cartridges, each storing 10 TB of data (compressed), and you can stack 10 of these SL8500s together. Boom: 1,000,000 TB. (N.b.: stacking means interconnecting the libraries; cartridges are moved between the stacked libraries automatically.)
    V. EMC: "Tape doesn't suck after all. We moved on.": Do you remember the infamous "Tape sucks, move on" Data Domain slogan? Of course they had to put it that way, having only disk products. But here's a fun fact: at EMC World 2012 there was a major presence of a tape-tech company - EMC, in a sudden burst of sanity, is embracing tape again.
    VI. The miraculous T10000C: Oracle StorageTek has developed an enterprise-grade tape drive and cartridge, the T10000C, with awesome numbers.
    The cartridge:
    - Native 5 TB capacity, 10 TB with compression.
    - Over a kilometer of tape within the cartridge, locked when unmounted - no rattling of your data.
    - The metal-particle data layer has been replaced with BaFe (barium ferrite): metal particles lose around 7% of their magnetism within 30 days, BaFe does not. Yes, we employ solid-state physicists doing R&D on demagnetisation in our labs.
    - Can be partitioned: storage tiering within the cartridge!
    The drive:
    - 2 GB cache.
    - Encryption implemented in hardware, with no performance hit.
    - 252 MB/s native sustained data rate, which beats disk technology by far, not to mention peak throughput.
    - Leads the tape while never touching its data side, protecting your data physically too.
    - Data integrity checking (CRC recalculation) on tape, within the drive, without having to read the data back to the server.
    - Reorders data from tape order and delivers it back in application order.
    - Writes 32 tracks at once and reads them back at once for the CRC check.
    VII. You only use 20% of your data on a regular basis. The remaining 80% just lies around for years on continuously spinning disks, consuming energy twice over (power plus cooling) and blocking disk storage capacity. There is a solution called SAM (Storage Archive Manager) that provides a filesystem unifying disk and tape, moving data on demand, and transparently to clients, between the different storage tiers. You can share these filesystems with NFS or CIFS for clients and enjoy the low TCO of tape. Tapes don't spin: they sit quietly in their slots, storing 10 TB of data, using no energy, producing no heat, automounted when a client accesses their data. See: http://www.oracle.com/us/products/servers-storage/storage/storage-software/storage-archive-manager/overview/index.html
    VIII. HW supported for over two decades: Did you know that the original PowderHorn library was released in '87 and was only discontinued in 2010? That is over two decades of supported operation. Tape libraries - just like the data-carrying tape cartridges - are built for longevity. Oh, and the T10000C cartridge has a 30-year archival life for long-term retention.
    IX. Tape is easy to manage: Have you heard of Tape Storage Analytics? It is a central graphical tool to summarize, monitor, and analyze the dataflow, health, and performance of drives and libraries. See: http://www.oracle.com/us/products/servers-storage/storage/tape-storage/tape-analytics/overview/index.html
    X. The next generation: The T10000B drives were able to reuse the T10000A cartridges and write even more data on them. On the same cartridges. We call this investment protection, and it is very important to Oracle for the future too. We usually support two generations of cartridges together. The current drive is the T10000C.
    (...I know I promised to list 10, but I still have two more I really want to mention. Allow me to work around the problem:)
    X++. The TallBots, the robots that move the cartridges around in a StorageTek library from tape slots to drives, are cableless. Cables, belts, and chains running to moving parts in a library cause maintenance downtime, so StorageTek eliminated them. The TallBots get power, commands, even firmware upgrades through the rails they run on. Also, the TallBots don't just hook'n'pull the tapes out of their slots; they actually grip'n'lift them out. No friction, no scratches, no zillion little plastic particles floating around in the library, in the drives, on your data.
    (X++)++: Tape beats SSDs and disks. In terms of throughput (252 MB/s), in terms of TCO (disks cause around 290x more power and cooling), and in terms of capacity (10 TB on a single medium, and soon more).
    So... do you need to store large amounts of data? Are you legally bound to archive it for dozens of years? Would you benefit from automatic storage tiering? Have you got large media chunks to be streamed at times? Have you got power and cooling issues in your growing datacenters? Do you find EMC's 180° turn in tape attitude amusing, yet appreciate it at the same time? If so, you aren't alone. Most of the data on this planet is stored on tape. Tape is coming. Big time.

    Read the article

  • zip being too nice (Mac OS X)

    - by stib
    I use zip to do a regular backup of a local directory onto a remote machine. They don't believe in things like rsync here, so it's the best I can do (?). Here's the script I use:

        echo $(date) >> ~/backuplog.txt
        if [[ -e /Volumes/backup/ ]]; then
            cd /Volumes/Non-RAID_Storage/
            for file in projects/*; do
                nice -n 10 zip -vru9 /Volumes/backup/nonRaidStorage.backup.zip "$file" 2>&1 \
                    | grep -v "zip info: local extra (21 bytes)" >> ~/backuplog.txt
            done
        else
            echo "backup volume not mounted" >> ~/backuplog.txt
        fi

    This all works fine, except that zip never uses much CPU, so it seems to be taking longer than it should. It never seems to get above 5%. I tried making it nice -20, but that didn't make any difference. Is it just the network or disk speed bottlenecking the process, or am I doing something wrong?
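
    One rough way to separate the suspects, sketched with the paths from the script above (the project and file names are placeholders), is to time compression alone against a raw copy over the network:

        # how fast is zip when the archive never leaves the local disk?
        time zip -ru9 -q /tmp/ziptest.zip /Volumes/Non-RAID_Storage/projects/SomeProject

        # how fast can the mounted backup volume absorb raw data, with no compression involved?
        time cp /tmp/ziptest.zip /Volumes/backup/

        # rough read speed of the source volume (bs=1m is the Mac OS X dd syntax)
        time dd if=/Volumes/Non-RAID_Storage/projects/SomeProject/somefile of=/dev/null bs=1m

    If the cp step is the slow one, the network share rather than zip (or nice) is the bottleneck, and no amount of CPU priority will help.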

    Read the article

  • XNA 4: GetData from Texture2D and Set it into Texture3D with specific order

    - by cubrman
    I am trying to convert my color grading 2D lookup texture into a 3D LUT. When I simply use:

        ColorAtlas.GetData(data);
        ColorAtlas3D.SetData(data);

    I get this: I tried building my 2D atlas horizontally, but it did not help - the data was messed up in a different way. So my question is: how can I influence the order of the data I get from the 2D atlas, and how can I properly pass it into my 3D atlas? Update: I know that I can GetData from a specific rectangular area and put it into several arrays, but the result is still the same. This is what I tried:

        Color[] data2D = new Color[0];
        for (int i = 0; i < 32; i++)
        {
            Color[] data = new Color[32 * 32];
            GraphicsDevice.SetRenderTarget(null);
            ColorAtlas.GetData(0, new Rectangle(0, i * 32, 32, 32), data, 0, data.Length);

            int oldLength = data2D.Length;
            Array.Resize<Color>(ref data2D, oldLength + data.Length);
            Array.Copy(data, 0, data2D, oldLength, data.Length);
        }
        ColorAtlas3D.SetData(data2D);

    Read the article

  • Developing a mobile application, how to show help if it contains too much data?

    - by MobileDev123
    I am developing a mobile application which has a lot of functionality, and I am pretty sure the design will confuse users about how to use some of it, so we decided to include some help, as we regularly see in desktop applications. Later, though, we found that the help text is too long; we don't think one screen is enough to describe what a user can do. Moreover, the project itself is expected to evolve based on the beta stage and user reports. After a lot of thinking and meetings we have settled on three options for showing users what they can do:
    1. Create a website or blog, so we can let users know what they can do with the application. The advantage is that it can be a good source of marketing, but users have to access the site, while most of the application can be used offline in the earlier versions.
    2. Create a section in the application, called demos, to show the same thing locally. But we are afraid it will increase the size, which we think can be avoided (and we are planning to avoid it if there is any option).
    3. Show pop-ups. We discarded this, thinking that pop-ups annoy users no matter what the platform is.
    I want to know from the community which option you would choose; we are also open to other ideas if you have them.

    Read the article

  • benchmark TCP: ab- or iperf-like tool to send hex/binary/pcap data?

    - by olan
    Hello all, I have written a server in Twisted for a current project I'm working on, and now I need to test it. It receives TCP packets, with the payload consisting of just a serialised binary string. I want to test the server for concurrency/throughput using that binary data as the payload, but I cannot find any tool that will allow me to do this. I tried iperf -F, but it didn't work; I think it was sending the binary/hex data as chars. I've also looked at ab, which seems perfect - if only it weren't HTTP-only. As well as these, I've had a look at tcpreplay, but it doesn't perform any testing (or establish TCP connections), so it's not much use. Any help would be greatly appreciated, as I'm rather stuck on this one!
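
    Lacking a ready-made tool, one crude concurrency test can be improvised with nc; this is only a sketch under assumptions (the server listens on localhost:8000, payload.bin holds one serialised request, and the nc build honours -w as an idle timeout):

        # fire 100 concurrent connections, each sending the same binary payload
        for i in $(seq 1 100); do
            nc -w 5 localhost 8000 < payload.bin &
        done
        wait

        # wrap it in time to get a rough connections-per-second figure
        time ( for i in $(seq 1 100); do nc -w 5 localhost 8000 < payload.bin & done; wait )

    It won't give iperf-grade numbers, but it does exercise real TCP connections with the real payload.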

    Read the article

  • Hello, can you just send me all your data please?

    - by fatherjack
    LiveJournal Tags: Security,SQL Server Our house phone rang on Saturday night and Mrs Fatherjack answered. I was in the other room but I heard her trying to explain to the caller that they were in some way mistaken. Eventually, as she got more irate with the caller, I went out and started to catch up with the events so far. The caller was trying to convince my wife that our computer was infected with a virus. She was confident that it wasn't. Her patience expired after almost 10 minutes...(read more)

    Read the article

  • What to call objects that may delete cached data to meet memory constraints?

    - by Brent
    I'm developing some cross-platform software which is intended to run on mobile devices. Both iOS and Android provide low-memory warnings. I plan to make a wrapper class that will free cached resources (like textures) when low-memory warnings are issued (assuming the resource is not in use). If the resource comes back into use, it'll re-cache it, etc. I'm trying to think of what this is called. In .NET it's similar to a "weak reference", but that only really makes sense when dealing with garbage collection, and since I'm using C++ and shared_ptr, a weak reference already has a meaning which is distinct from the one I'm thinking of. There's also the difference that this class will be able to rebuild the cache when needed. What is this pattern (or whatever it is) called? Edit: Feel free to recommend tags for this question.

    Read the article

  • Why can't I add a hot spare in FreeBSD? Can anybody help me fix it?

    - by hamlet
    Why can't I add a hot spare? Can anybody help me fix it?

        mfiutil add e1:s1 mfid0
        mfiutil: Drive 1 is not available

    My mfi status:

        mfiutil show config
        mfi0 Configuration: 1 arrays, 1 volumes, 0 spares
            array 0 of 2 drives:
                drive 0 ( 137G) ONLINE <HITACHI HUS153014VLS300 A410 serial=JFWHSB4C> SAS enclosure 1, slot 0
                drive 1 ( 137G) ONLINE <HITACHI HUS153014VLS300 A410 serial=JFWJ3AEC> SAS enclosure 1, slot 1
            volume mfid0 (136G) RAID-1 64K OPTIMAL spans:
                array 0

        mfiutil show events
        1468 (boot + 25s/BATTERY/WARN) - Battery removed
        1475 (boot + 52s/DRIVE/WARN) - PD 00(e1/s0) is not a certified drive
        1478 (boot + 52s/DRIVE/WARN) - PD 01(e1/s1) is not a certified drive
        1480 (boot + 64s/BATTERY/WARN) - BBU disabled; changing WB virtual disks to WT

        mfiutil show volumes
        mfi0 Volumes:
          Id     Size    Level   Stripe  State    Cache     Name
        mfid0 ( 136G)   RAID-1     64K   OPTIMAL  Disabled
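
    An aside rather than a confirmed fix: mfiutil will only accept a drive as a hot spare when that drive is in an unconfigured (not ONLINE) state, and e1:s1 above is already an ONLINE member of array 0, so a third, unconfigured disk would normally be needed. The per-drive state can be checked with:

        mfiutil show drives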

    Read the article

  • Ensuring that saved data has not been edited in a game with both offline and online components

    - by Omar Kooheji
    I'm in the pre-planning phase of a game design, and I was wondering if there is a sensible way to stop people from editing saves in a game with both offline and online components. The offline component would allow the player to play through the game, and the online component would allow them to play against other players, so I would need to make sure that people hadn't edited the source code or save files while offline to gain an advantage while online. The game is likely to be developed in either .NET or Java, both of which are unfortunately easy to decompile.

    Read the article

  • What lasts longer: Data stored on non-volatile flash RAM, optical media, or magnetic disk?

    - by Chris W. Rea
    What lasts longer: Data stored on non-volatile flash RAM (USB stick or SD cards?), optical media (CD, DVD, or Blu-Ray?), or magnetic disk (floppies, hard drives?) My gut tells me optical media, but I'm not sure. Furthermore, which of those digital media would be most suitable for long-term data storage where environmental issues are unknown, such as low/high temperature or humidity? For example, what digital media could be stored in a basement, attic, or time capsule, and be expected to survive a reasonably long time? e.g. a lifetime, and then some. Update: Looks like optical media and magnetic tape each have one vote below. Does anybody else have an opinion or know of a study comparing the two?

    Read the article

  • How can I get a data usage/access log for an external hard-drive?

    - by Vittorio Vittori
    Hello, I'm working in an office with many people, and sometimes I leave my external hard drive, with my personal data on it, behind. I would like to know if there is some way to see whether my hard disk was used during my absence. I'm not the computer administrator, so I can't use exclusive file permissions, and I would really like to know whether the hard disk was opened from another computer. I am using a Mac. Is there some other way to protect personal data on a USB device like a hard drive? If yes, can you post some links to possible guides? I hope there is some trick!
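
    There is no retroactive access log on a plain external drive, but going forward one hedged option (a sketch; "MyDrive" is a placeholder volume name, and fs_usage only records activity while it is running on the machine the disk is plugged into) is to watch filesystem calls touching the volume:

        # live trace of filesystem activity on the volume (requires admin rights)
        sudo fs_usage -w -f filesys | grep /Volumes/MyDrive

        # file access times can also hint at recent reads, if the volume records them
        ls -lu /Volumes/MyDrive

    Neither replaces real protection; an encrypted disk image stored on the drive guards the data itself rather than just reporting access.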

    Read the article

  • Outlook 2010 + Move IMAP PST file = Outlook data file cannot be accessed.

    - by GWB
    I set up a new IMAP account in Outlook 2010. It works, but it creates the IMAP PST file in C:\Users\User\AppData\Local\Microsoft\Outlook. I want the file on my data drive in D:\Users\User\Documents\Outlook Files (the same folder where Outlook automatically creates the local Outlook PST). I followed the instructions here to move the IMAP PST. Testing the account (send/receive) works fine, but if I try to manually send an email I get error 0x8004010F, "Outlook data file cannot be accessed". I've tried repairing the PST using SCANPST (it always finds errors) and deleting and recreating the account, but I get the same error. If I move the PST file back, it works again, but this is not ideal. Note: I don't think this is a duplicate of this question, as the cause is different and the solution does not help.

    Read the article

  • Is it possible to convert a striped logical volume to a linear logical volume?

    - by JooMing
    I have a logical volume that is striped across three physical volumes. I had to move this logical volume to another physical volume, which worked nicely with the pvmove command. However, I discovered later that the logical volume is still striped, and now all three stripes are on the same physical volume. Is there any way to convert striped logical volumes to linear logical volumes? I'm using LVM2 on Linux. The obvious possibility, I figure, is to rename the striped logical volume, create a new linear logical volume, and then copy the data over, but that requires taking the filesystem offline for some time, and unfortunately I can't do that before next week. Is there any better alternative?
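
    For reference, the offline fallback described above would look roughly like this. This is only a sketch: vg0, striped_lv, the 100G size, and the ext4 filesystem are placeholder assumptions, and the copy still needs the downtime window mentioned in the question.

        umount /mnt/data
        lvrename vg0 striped_lv striped_lv_old
        lvcreate -n linear_lv -L 100G vg0        # no --stripes option, so the new LV is linear
        mkfs.ext4 /dev/vg0/linear_lv
        mkdir -p /mnt/old
        mount /dev/vg0/striped_lv_old /mnt/old
        mount /dev/vg0/linear_lv /mnt/data
        cp -a /mnt/old/. /mnt/data/
        umount /mnt/old
        lvremove vg0/striped_lv_old

    Whether a newer lvconvert can restripe in place depends on the LVM2 version, so check lvconvert(8) on the system before scheduling the downtime.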

    Read the article

  • Is there a way to see granular per-visit data in Google Analytics?

    - by jakub.g
    I've started using Google Analytics very recently and I'm a bit lost in the sea of options (I had been using Sitemeter for some time before). I've clicked through the service a lot but couldn't find what I'm accustomed to. I can see a multitude of aggregated statistics in GA, like charts of browser share, lists of country share, lists of the most visited URLs within the site, and so on, but I would actually like to analyze each of the visits themselves. Something like: User X, France, Chrome, 7 pageviews between 18:01 and 18:15, entered on a.htm and exited on b.htm; User Y, UK, Firefox, 1 pageview at 18:20, entered on c.htm. Is there an easy way to see the reports in this form (perhaps by clicking a link through to a separate page with that particular session's stats)? How do I navigate there if so?

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices, combined with Oracle Database 12c (see video), introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. After all, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real-world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost. Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain.
    Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single-instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart.
    Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure, as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart.
    Gold raises the game substantially for business-critical applications that can’t accept vulnerability to single points of failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real-time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times.
    Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero-downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement, but they deliver substantial value for your most critical applications, where downtime and data loss are not an option.
    The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection - hence the "One Size Does Not Fit All" title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another - whether it’s your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at: Oracle Maximum Availability Architecture - Webcast, and Maximize Availability with Oracle Database 12c - Technical White Paper.

    Read the article

  • Convenient practice for where to place images?

    - by Baumr
    A lot of developers place all image files inside a central directory, for example: /i/img/ /images/ /img/ Isn't it better (e.g. content architecture, on-page SEO, code maintainability, filename maintainability, etc.) to place them inside the relevant directories in which they are used? For example: example.com/logo.jpg example.com/about/photo-of-me.jpg example.com/contact/map.png example.com/products/category1-square.png example.com/products/category2-square.png example.com/products/category1/product1-thumb.jpg example.com/products/category1/product2-thumb.jpg example.com/products/category1/product1/product1-large.jpg example.com/products/category1/product1/product2-large.jpg example.com/products/category1/product1/product3-large.jpg What is the best practice here regarding all possible considerations (for static non-CMS websites)? N.B. The names product1-large and product1-thumb are just examples in this context to illustrate what kind of images they are. It is advised to use descriptive filenames for SEO benefit.

    Read the article

  • Disk space mismatch on OS X Server (Leopard)

    - by John Gardeniers
    My Nagios system sent me an alert to inform me that the disk space on one of the drives on our OS X server is very low. When I run df /Volumes/Apps/ I get:

        /dev/disk0s3  117209520  114932472  2277048  99%  /Volumes/Apps

    When I run du -c /Volumes/Apps it reports:

        11489944  total

    Why might there be such a vast difference? Even more importantly, how do I find the problem and what can I do about it? I'm essentially just a Windows admin, so am well out of my comfort zone here. I use a Mac but I'm not a Mac admin in any real sense of the word.
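
    Two common causes of a df/du gap are files that were deleted while still held open by a process (their space is not released until the process exits) and data hidden underneath a mount point. A couple of quick checks, sketched with tools that ship with OS X:

        # files on this volume that are deleted but still open somewhere
        sudo lsof +L1 /Volumes/Apps

        # per-directory sizes, staying on this one filesystem, to see where the space went
        sudo du -xsh /Volumes/Apps/*

    If lsof turns up a large deleted-but-open log file, restarting the owning process usually gives the space back.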

    Read the article

  • How likely can my data be recovered after Windows CHKDSK performed on a degraded RAID 5 array?

    - by chrisling106
    Hello there, we have a RAID 5 setup with 3 SATA disks; #2 went down, as reported on the pre-POST screen. Unfortunately, for some reason out of my control, the system was rebooted with the RAID degraded :-O Windows XP (64-bit) loaded, and CHKDSK ran automatically and did its "recovery"! From that point onwards, the following error is prompted every time, even in Safe Mode: lsass.exe - The endpoint format is invalid. I took those 3 disks to a data recovery expert and need to wait at least 2-4 days for results. There are 2 VMs, spread over multiple files, stored in this RAID 5 array, and there's no backup! Sorry, I just inherited the system from an ex-staff member who left the company 2 months before I joined. How likely is it that the data can be recovered?

    Read the article

  • ConsumeStructuredBuffer, what am I doing wrong?

    - by John
    I'm trying to implement the 3rd exercise in chapter 12 of Introduction to 3D Game Programming with DirectX 11, that is: implement a Compute Shader to calculate the length of 64 vectors. Previous exercises ask you to do the same with typed buffers and regular structured buffers, and I had no problems with them. From what I've read, [Consume|Append]StructuredBuffers are bound to the pipeline using UnorderedAccessViews (as long as they use the D3D11_BUFFER_UAV_FLAG_APPEND flag, and the buffers have both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_UNORDERED_ACCESS bind flags). Problem is: my AppendStructuredBuffer works, since I can append data to it and retrieve it from the application to write to a results file, but the ConsumeStructuredBuffer always returns zeroed data. Data is in the buffer, since if I change the UAV to a ShaderResourceView and to a StructuredBuffer on the HLSL side it works. I don't know what I am missing: Should I initialize the ConsumeStructuredBuffer on the GPU, or can I do it when I create the buffer (as I am currently doing)? Is it OK to bind the buffer with a UAV as described above? Do I need to bind it as a ShaderResourceView somehow? Maybe I am missing some step? This is the declaration of the buffers in the Compute Shader:

        struct Data
        {
            float3 v;
        };

        struct Result
        {
            float l;
        };

        ConsumeStructuredBuffer<Data> gInput;
        AppendStructuredBuffer<Result> gOutput;

    And here is the creation of the buffer and UAV for the input data:

        D3D11_BUFFER_DESC inputDesc;
        inputDesc.Usage = D3D11_USAGE_DEFAULT;
        inputDesc.ByteWidth = sizeof(Data) * mNumElements;
        inputDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
        inputDesc.CPUAccessFlags = 0;
        inputDesc.StructureByteStride = sizeof(Data);
        inputDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;

        D3D11_SUBRESOURCE_DATA vinitData;
        vinitData.pSysMem = &data[0];
        HR(md3dDevice->CreateBuffer(&inputDesc, &vinitData, &mInputBuffer));

        D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
        uavDesc.Format = DXGI_FORMAT_UNKNOWN;
        uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
        uavDesc.Buffer.FirstElement = 0;
        uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND;
        uavDesc.Buffer.NumElements = mNumElements;
        md3dDevice->CreateUnorderedAccessView(mInputBuffer, &uavDesc, &mInputUAV);

    The initial data is an array of Data structs, each containing an XMFLOAT3 with random data. I bind the UAV to the shader using the Effects framework:

        ID3DX11EffectUnorderedAccessViewVariable* Input =
            mFX->GetVariableByName("gInput")->AsUnorderedAccessView();
        Input->SetUnorderedAccessView(uav); // uav is mInputUAV

    Any ideas? Thank you.

    Read the article

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutesy" and that we never talk about user roles that have lots of data entry as a requirement. Some (no names) consider those old clunker forms, with their myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications, for sure. But consider what is really the issue here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks do they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience. Now and then, I see forms that weren't designed for end-user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only one task involved, when there may have been several. At times, completing one of these huge forms accurately became so tedious that, under pressure, it made more sense for the user to complete it as quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive. So here are a few basics on forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components.
    User-Centered Forms Design Considerations
    The starting point - something you must always keep in mind with your own design - is design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points in user testing those forms:
    - Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example, imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now.
    - Design your form to reflect the task involved (i.e., the business process) and not the database schema.
    - On the form, group like fields together, logically.
    - Eliminate duplicate data entry, or prepopulate from previous data entry.
    - Allow users to complete fields in the order they wish (i.e., no interdependency).
    - Allow for tabbing between fields (keyboard is faster than mouse), so know how the browser supports this (see that RCUX guideline).
    - Allow for final validation at the page level, not at field-level entry. Way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action, and to easily navigate to the point of error. Better still, be error tolerant.
    - Allow users to enter data in formats they are comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically too (see the ADF converter demos; convenience patterns can also be written).
    - Only ask for data input when it's needed. Get rid of, or hide, optional fields.
    - Cut down on the number of mandatory fields, and mark them clearly (use a *).
    - Clearly label the fields in plain language.
    I am sure you have a few more tips on forms design for data entry users. Remember the user - then head for the comments.

    Read the article

  • Sanity checks vs file sizes

    - by Richard Fabian
    In your game assets, do you make room for explicit sanity checks, or do you have some generally expected bounds which you assert? I've been thinking about how we compress data, and I think it's much better to have the former, and less of the latter. If your data can exceed your normal valid ranges, but exceeding them is an error, then surely that implies you're not compressing the data well enough? What do you do to find out if your data is compressed as far as it can be, and what do you use to ensure your data isn't corrupted and to ensure it's an official release? EDIT: I'm not interested in sanity-checking the file size, but in how you manage your sanity checks: do you account for the extra size the checks require with explicit extra data, or do you allow each data member enough file space to go out of its valid range, so that the asset can be checked merely by looking at it in memory after loading?

    Read the article

  • How does one handle sensitive data when using Github and Heroku?

    - by Jonas
    I am not yet accustomed to the way Git works (and I wonder if anyone besides Linus is ;)). If you use Heroku to host your application, you need to have your code checked into a Git repo. If you work on an open-source project, you are most likely going to share this repo on GitHub or another Git host. Some things should not be checked into the public repo: database passwords, API keys, certificates, etc. But these things still need to be part of the Git repo, since you use it to push your code to Heroku. How do you work with this use case? Note: I know that Heroku or PHPFog can use server variables to circumvent this problem. My question is more about how to "hide" parts of the code.
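
    For what it's worth, the server-variable route mentioned in the note usually looks something like the sketch below. The file name, variable names and values are placeholders, and depending on the Heroku toolbelt version the command may be config:add rather than config:set.

        # keep the real secrets file out of the repo entirely
        echo "config/secrets.yml" >> .gitignore

        # store the real values as Heroku config vars instead of in code
        heroku config:set DATABASE_PASSWORD=s3cret API_KEY=abc123

    The application then reads the values from its environment at runtime, and the public repo only ever contains an example config with dummy values.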

    Read the article

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter) Whereas Relational Databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we’ve made great strides in indexing text, processing spatial data and processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data feed that provides aggregations to an RDBMS. However, the document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples. NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn’t impose a schema, and relies on the application to enforce the data structure. This is another take on the old ‘EAV’ problem (where you don’t know in advance all the attributes of a particular entity). It uses a clever replica set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying. However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC Driver for Apache Hive and an Add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data, and then store it in SQL Server. As one steeped in the culture of Relational SQL Databases, I might be expected to throw up my hands in the air in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker's bottom knocker (a potter’s assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). On the contrary, however, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. The flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of Relational Databases and inspire them into back-fitting some features, such as horizontal scaling with sharding and automatic failover, into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.

    Read the article
