Search Results

Search found 97400 results on 3896 pages for 'application data'.


  • Cannot read from 2nd SATA data drive connected via SATA docking station

    - by Robbo
    Installed 10.10 this week on a dual-boot system. Everything else works fine, but I cannot read from the 2nd SATA drive with all my data. The same drive works normally when booted into Windows XP. The interesting part is that I can see the drive in Ubuntu Disk Manager, can read all its attributes and can test it; it shows up in Disk Manager, Storage Device Manager and Mount Manager, and I can mount it and even change attributes. It appears healthy, but it does not show up in "Computer" or anywhere else from which it can be accessed. The drive is connected via an external eSATA docking station which is connected to a SATA port on the motherboard.

    Read the article

  • In-memory datastore in Haskell

    - by Simon
    I want to implement an in-memory datastore for a web service in Haskell. I want to run transactions in the STM monad. When I google "hash table STM Haskell" I only get this: Data.BTree.HashTable.STM. The module name and complexities suggest that this is implemented as a tree. I would think that an array would be more efficient for mutable hash tables. Is there a reason to avoid using an array for an STM hash table? Do I gain anything with this STM hash table, or should I just use an STM ref to an IntMap?

    Read the article

  • Tic-Tac-Toe game AI

    - by David Jones
    I'm looking into creating a simple tic-tac-toe/noughts-and-crosses game in ActionScript 3 and am trying to understand the ideas behind the AI used in a game like this. I've seen some simplistic examples online, but from what I've read a game tree or something like minimax is the best way to go about this. Can anyone help explain or reference any good examples of this? I've seen that there is a library called as3ds (data structures for game developers) which has a number of classes that might help tie this together. Any info/examples or help is much appreciated.
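
    A minimal Python sketch of minimax for a tic-tac-toe board follows (the question targets ActionScript 3, but the algorithm ports directly; the board representation is an assumption for illustration):

        LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
                 (0, 4, 8), (2, 4, 6)]                # diagonals

        def winner(board):
            """Return 'X' or 'O' if someone has three in a row, else None."""
            for a, b, c in LINES:
                if board[a] is not None and board[a] == board[b] == board[c]:
                    return board[a]
            return None

        def minimax(board, player):
            """Return (score for X, best move) with `player` to move; X maximises, O minimises."""
            win = winner(board)
            if win == 'X':
                return 1, None
            if win == 'O':
                return -1, None
            moves = [i for i, cell in enumerate(board) if cell is None]
            if not moves:
                return 0, None                      # draw
            best = None
            for move in moves:
                board[move] = player                # try the move...
                score, _ = minimax(board, 'O' if player == 'X' else 'X')
                board[move] = None                  # ...and undo it
                if (best is None
                        or (player == 'X' and score > best[0])
                        or (player == 'O' and score < best[0])):
                    best = (score, move)
            return best

        # The AI side simply plays the move minimax returns, e.g. for 'O':
        board = ['X', None, None, None, 'O', None, None, None, 'X']
        print(minimax(board, 'O'))                  # best O can force here is a draw (score 0)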

    Read the article

  • Can't mount data DVD using 12.04

    - by dash2
    I cannot mount a data DVD using Ubuntu 12.04. I can read the DVD using a different OS, so the DVD is not the problem. I can read other CDs using Ubuntu. The error I get is:

        $ sudo mount /dev/dvd /cdrom
        mount: no medium found on /dev/sr0

    The relevant output from lshw is:

        *-cdrom
           description: DVD reader
           product: DVD/CDRW UJDA775
           vendor: MATSHITA
           physical id: 0.0.0
           bus info: scsi@4:0.0.0
           logical name: /dev/cdrom
           logical name: /dev/cdrw
           logical name: /dev/dvd
           logical name: /dev/sr0
           version: CB03
           capabilities: removable audio cd-r cd-rw dvd
           configuration: ansiversion=5 status=nodisc

    I have installed ubuntu-restricted-extras and run /usr/share/doc/libdvdread4/install-css.sh.

    Read the article

  • Slow transfer to external USB3 hard drive

    - by JMP
    Trying to back up data from a hard drive before reloading Windows following some issues with its installation. Having trouble with the file transfer to a USB 3.0/2.0 external hard drive formatted NTFS. Getting a transfer speed of about 116.7 kB/sec; in other words it's taking about 5 hours to transfer 1.4 GB. I've got about 80 GB to go, so the transfer is going to take 11 days. Seems a little on the slow side. Am I missing something? Is there a way to make this faster? There is no issue with the external drive transferring this amount in Windows, but I don't have that option at the moment.

    Read the article

  • Clear Linux file system after shutdown / start

    - by user35443
    I have a very specific task. I need to clear the desktop, downloads, documents and so on after every shutdown or logout. For example, if anyone downloads something using Google Chrome, he will work with it and then he'll shut down the computer for the next user. And when the second user sits down to work on the computer, he'll find a clean file system without the data downloaded by the first user. On Windows, I used to work with Returnil Virtual System, but it doesn't have support for Linux. Can anybody tell me if this is possible and, if so, how? I was also thinking of running that program under Wine, but I don't think it would be the best idea.
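
    There is no Returnil equivalent here, but for ordinary files a small cleanup script run at shutdown or at login gets most of the way. A minimal Python sketch, assuming the standard per-user directory names; hooking it into the shutdown/login sequence (systemd unit, rc script or a session hook) is distribution-specific and left out:

        import shutil
        from pathlib import Path

        # Directories to wipe between users; the names are assumptions and
        # should match the folders actually used on the machine.
        DIRS_TO_CLEAR = ["Desktop", "Downloads", "Documents"]

        def clear_directory(path: Path) -> None:
            """Delete everything inside `path` but keep the directory itself."""
            if not path.is_dir():
                return
            for entry in path.iterdir():
                if entry.is_dir() and not entry.is_symlink():
                    shutil.rmtree(entry, ignore_errors=True)
                else:
                    entry.unlink(missing_ok=True)

        if __name__ == "__main__":
            home = Path.home()   # run as the user whose data should be cleared
            for name in DIRS_TO_CLEAR:
                clear_directory(home / name)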

    Read the article

  • Fast Track Data Warehouse 3.0 Reference Guide

    - by jchang
    Microsoft just released the Fast Track Data Warehouse 3.0 Reference Guide. The new changes are an increased memory recommendation and a change in the disks per RAID group from 2-disk RAID 1 to 4-disk RAID 10. Memory: the earlier FTDW reference architecture cited 4GB memory per core. There was no rationale behind this, but it was felt some rule was better than no rule. The new FTDW RG correctly cites the rationale that more memory helps keep hash join intermediate results and sort operations in memory. 4-Disk...(read more)

    Read the article

  • Evolution and Thunderbird sharing the same mail data?

    - by balki
    Hi, I have been using Evolution for quite a long time and it has downloaded around 1.6 GB of mail from Gmail. I want to try Thunderbird, but I don't want to re-download everything again. Is it possible to have both clients share the same data? I'll make sure I don't use both at the same time, if that matters. I'll move to Thunderbird fully if I'm happy with it. The problems I face with Evolution are that I have to have the GUI running all the time if I want instant alerts and to send mail immediately. Also, it loads messages slowly, and even after I move to the next mail it slowly downloads all the linked images before moving on.

    Read the article

  • Problem with APTonCD application

    - by Harikrishnan
    I created an ISO image using APTonCD and burned it to a DVD. Now when I try to restore, the program does not detect the DVD in the drive. It shows "Please insert a disc in the drive." and if I click "OK" it shows "E: Failed to mount the cdrom.". The DVD is in the drive itself. I tried "sudo lshw -C disk" and the output is:

        *-cdrom
           description: DVD-RAM writer
           product: DVDRAM GH22NS50
           vendor: HL-DT-ST
           physical id: 1
           bus info: scsi@1:0.0.0
           logical name: /dev/cdrom
           logical name: /dev/cdrw
           logical name: /dev/dvd
           logical name: /dev/dvdrw
           logical name: /dev/scd0
           logical name: /dev/sr0
           logical name: /media/APTonCD
           logical name: /media/apt
           version: TN02
           capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
           configuration: ansiversion=5 mount.fstype=iso9660 mount.options=ro,relatime,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500 state=mounted status=ready
         *-medium
              physical id: 0
              logical name: /dev/cdrom
              logical name: /media/APTonCD
              logical name: /media/apt
              configuration: mount.fstype=iso9660 mount.options=ro,relatime,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500 state=mounted

    Then I checked in the Disk Utility application; there the DVD-ROM is shown as "/dev/sr0". My Ubuntu version is 10.10. Please help me solve the problem.

    Read the article

  • Increase the /home partition without losing the data

    - by sagarchalise
    I have a 320 GB hard drive with three partitions: /, /home and swap. What I want to do is change the size of swap, which is now 8 GB, to 5 GB and append that 3 GB to my /home partition. I have searched the web for this but can't seem to find a proper way to increase my home partition. Can anyone help? By the way, I know how to decrease the size of swap; I just need the proper way to append that unallocated 3 GB of space to my /home partition without losing the data. Thank you.

    Read the article

  • Google Analytics/AdWords account and leaking of private data

    - by Satellite
    I am frequently asked to log into clients' Google Analytics and AdWords accounts. If I forget to log out before visiting other Google properties (Google search, YouTube etc.), this leaves tracks of my views/searches, exposing my activities to the client. Summary:

    1. Client gives me access to their Google Analytics / AdWords account
    2. I log into the client's Analytics account and do some stuff
    3. Then in another tab I perform some related Google searches to solve some related issues
    4. Issues solved, I then close the Analytics tab
    5. I then visit google.com and perform some unrelated searches
    6. I then visit YouTube and view some unrelated videos

    All Web and YouTube searches are recorded in the client's Google account, thus leaking potentially sensitive data. Even assuming that I remember to log out correctly at step 4 (as I do 95% of the time), anything I do at step 3 is exposed to the client. I would be surprised if this is not a very common issue. I'm looking for a technical solution to ensure that this can never happen. Any ideas?

    Read the article

  • Crawling for geotagged data

    - by abe3
    I have no experience with web crawlers, but I know that Apache maintains an open-source web crawler called "Lucene". How would I go about writing such a crawler to search the web for geotagged data close to a particular location? What would a general road map look like? How do I pick which slice of the web to crawl? Do I use regular expressions to find things that look like longitudes and latitudes? What does a general sketch of that solution look like?
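
    For the regular-expression part of the question, a rough Python sketch follows. It covers only coordinate extraction and a distance filter, not fetching or crawl scheduling, and the pattern, the example page and the 50 km radius are illustrative assumptions (many pages expose coordinates through geo microformats, meta tags or EXIF rather than visible text):

        import math
        import re

        # Very rough pattern for decimal "lat, lon" pairs such as "40.7128, -74.0060".
        COORD_RE = re.compile(r"(-?\d{1,2}\.\d{3,8})\s*,\s*(-?\d{1,3}\.\d{3,8})")

        def extract_coords(html: str):
            """Yield (lat, lon) pairs that look like plausible coordinates."""
            for lat_s, lon_s in COORD_RE.findall(html):
                lat, lon = float(lat_s), float(lon_s)
                if -90 <= lat <= 90 and -180 <= lon <= 180:
                    yield lat, lon

        def within_km(lat, lon, center_lat, center_lon, radius_km):
            """Great-circle (haversine) distance test against a centre point."""
            r = 6371.0  # mean Earth radius in km
            p1, p2 = math.radians(lat), math.radians(center_lat)
            dp = math.radians(center_lat - lat)
            dl = math.radians(center_lon - lon)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a)) <= radius_km

        # Example: keep only coordinates within 50 km of New York City.
        page = "<span>Taken at 40.7580, -73.9855</span>"
        hits = [c for c in extract_coords(page) if within_km(*c, 40.7128, -74.0060, 50)]
        print(hits)   # [(40.758, -73.9855)]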

    Read the article

  • How should I track approval workflow when users at every security level can create a request?

    - by Eric Belair
    I am writing a new application that allows users to enter requests. Once a request is entered, it must follow an approval workflow to be finally approved by a user at the highest security level. So, let's say a user at Security Level 1 enters a request. This request must be approved by his superior, a user at Security Level 2. Once the Security Level 2 user approves it, it must be approved by a user at Security Level 3. Once the Security Level 3 user approves it, it is considered fully approved. However, users at any of the three Security Levels can enter requests. So, if a Security Level 3 user enters a request, it is automatically considered "fully approved". And, if a Security Level 2 user enters a request, it must only be approved by a Security Level 3 user. I'm currently storing each approval status in a database log table, like so:

        STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
        --------------  ----------  ---------------  -----------------------
        1               1           USER_SUBMIT      2012-09-01 00:00:00.000
        2               1           APPROVED_LEVEL2  2012-09-01 01:00:00.000
        3               1           APPROVED_LEVEL3  2012-09-01 02:00:00.000
        4               2           USER_SUBMIT      2012-09-01 02:30:00.000
        5               2           APPROVED_LEVEL2  2012-09-01 02:45:00.000

    My question is, which is the better design:

    1. Record all three statuses for every request, or
    2. Record only the statuses needed according to the Security Level of the user submitting the request.

    In Case 2, the data might look like this for two requests, one submitted by a Security Level 2 user and another submitted by a Security Level 3 user:

        STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
        --------------  ----------  ---------------  -----------------------
        1               3           APPROVED_LEVEL2  2012-09-01 01:00:00.000
        2               3           APPROVED_LEVEL3  2012-09-01 02:00:00.000
        3               4           APPROVED_LEVEL3  2012-09-01 02:00:00.000
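
    If Case 2 is chosen, the rule "record only what the submitter's level still requires" is small enough to express directly. A minimal Python sketch of that rule, using the status names from the tables above (the three-level ceiling is carried over from the question):

        MAX_LEVEL = 3

        def statuses_for(submitter_level: int) -> list[str]:
            """Status rows a request accumulates under Case 2 (record only what is needed)."""
            # Submission by a level-2 or level-3 user counts as that level's approval;
            # a level-1 submission is just USER_SUBMIT.
            first = "USER_SUBMIT" if submitter_level == 1 else f"APPROVED_LEVEL{submitter_level}"
            rest = [f"APPROVED_LEVEL{lvl}" for lvl in range(submitter_level + 1, MAX_LEVEL + 1)]
            return [first] + rest

        def is_fully_approved(recorded: set[str]) -> bool:
            """Fully approved once the top-level approval has been recorded."""
            return f"APPROVED_LEVEL{MAX_LEVEL}" in recorded

        print(statuses_for(1))   # ['USER_SUBMIT', 'APPROVED_LEVEL2', 'APPROVED_LEVEL3']
        print(statuses_for(2))   # ['APPROVED_LEVEL2', 'APPROVED_LEVEL3']  -- matches request 3 above
        print(statuses_for(3))   # ['APPROVED_LEVEL3']                     -- matches request 4 above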

    Read the article

  • What are some good, simple examples for queues?

    - by Michael Ekstrand
    I'm teaching CS2 (Java and data structures), and am having some difficulty coming up with good examples to use when teaching queues. The two major applications I use them for are multithreaded message passing (but MT programming is out of scope for the course), and BFS-style algorithms (and I won't be covering graphs until later in the term). I also want to avoid contrived examples. Most things that I think of, if I were actually going to solve them in a single-threaded fashion I would just use a list rather than a queue. I tend to only use queues when processing and discovery are interleaved (e.g. search), or in other special cases like length-limited buffers (e.g. maintaining last N items). To the extent practical, I am trying to teach my students good ways to actually do things in real programs, not just toys to show off a feature. Any suggestions of good, simple algorithms or applications of queues that I can use as examples but that require a minimum of other prior knowledge?
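
    To make the "interleaved processing and discovery" point concrete without graphs or threads, a queue-backed service-desk simulation plus the bounded "last N items" buffer mentioned above both fit in a few lines. The course is in Java, but here is the shape of such an exercise as a short Python sketch (arrival and service parameters are arbitrary):

        from collections import deque
        import random

        # 1. Length-limited buffer: keep only the last N items seen (like `tail`).
        def last_n(items, n=5):
            buf = deque(maxlen=n)              # old items fall off the front automatically
            for item in items:
                buf.append(item)
            return list(buf)

        # 2. Toy service-desk simulation: customers arrive at random and are served
        #    strictly in arrival order, which is exactly a FIFO queue. Arrivals
        #    (discovery) and serving (processing) are interleaved, so a plain list
        #    built up front would not model it naturally.
        def simulate(minutes=60, arrival_prob=0.4, service_time=2):
            waiting = deque()
            busy_until = 0
            waits = []
            for t in range(minutes):
                if random.random() < arrival_prob:
                    waiting.append(t)              # remember each customer's arrival time
                if t >= busy_until and waiting:
                    arrived = waiting.popleft()    # first come, first served
                    waits.append(t - arrived)
                    busy_until = t + service_time
            return waits

        print(last_n(range(12)))                   # [7, 8, 9, 10, 11]
        waits = simulate()
        if waits:
            print("average wait:", sum(waits) / len(waits))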

    Read the article

  • Oracle Honors Hitachi Data Systems with 2012 Taleo Customer Innovation Award

    - by Scott Ewart
    High-Tech Leader Recognized at Taleo World for its Strategic Initiative Aligning Talent, Performance and Revenues. Oracle awarded the 2012 Taleo Customer Innovation Award to Hitachi Data Systems (HDS), a wholly owned subsidiary of Hitachi, Ltd., for transforming performance management within its global sales organization with Oracle Taleo talent management solutions. The Taleo Innovation Awards honor and recognize Oracle Taleo customers that advance talent management initiatives using innovation, leadership and best practices. Oracle honored HDS along with finalists National Heritage Academies and CACI at a ceremony held September 13 at Taleo World in Chicago. Josh Bersin, President and CEO of Bersin & Associates, was the emcee for the ceremony. The honorees were selected from dozens of global submissions by a panel of influential industry analysts with expertise in talent management. To view the full story and press release, click here.

    Read the article

  • Question about server usage, big community platform

    - by Json
    I'm working on a community platform written in PHP and MySQL. I have some questions about the server usage; maybe someone can help me out. The community is based on jQuery with many AJAX requests to update content. It makes 5-10 AJAX (JSON, GET, POST) requests every 5 seconds; the requests fetch user data like notifications and messages by running MySQL queries. I wonder how a server will handle this when there are more than 5,000 users online. Then it will be 50,000 requests every 5 seconds; what kind of server do you need to handle this? Or maybe even more: when there are 15,000 users online, 150,000 requests every 5 seconds. My web server has the following specs: Xeon Quad, 2048 MB RAM, 5000 GB traffic. Will it be good enough, and for how many users? Can anyone help me out, or does anyone know where to find such information or how to make this kind of calculation?
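
    As a back-of-the-envelope calculation (taking the upper figure of 10 requests per user per 5-second polling interval and assuming the load is spread evenly), the numbers translate into requests per second like this; the figures below are arithmetic only, not a sizing recommendation:

        # Per-user figures taken from the question; everything else is plain arithmetic.
        REQUESTS_PER_POLL = 10
        POLL_INTERVAL_S = 5

        def requests_per_second(users: int) -> float:
            return users * REQUESTS_PER_POLL / POLL_INTERVAL_S

        for users in (5_000, 15_000):
            print(f"{users:>6} users -> {requests_per_second(users):>8,.0f} req/s")
        # 5,000 users  -> 10,000 req/s
        # 15,000 users -> 30,000 req/s
        # Sustaining tens of thousands of PHP/MySQL hits per second on a single
        # Xeon Quad with 2 GB of RAM is unrealistic; this points at caching,
        # long polling/websockets and horizontal scaling rather than one bigger box.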

    Read the article

  • SQL Server Data Tools–BI for Visual Studio 2013 Re-released

    - by Greg Low
    Customers used to complain that the tooling for creating BI projects (Analysis Services MD and Tabular, Reporting Services, and Integration Services) was based on earlier versions of Visual Studio than the ones they were using for their other work in Visual Studio (such as C#, VB, and ASP.NET projects). To alleviate that problem, the shipment of those tools has been decoupled from the shipment of the SQL Server product. In SQL Server 2014, the BI tooling isn't even included in the released version of SQL Server. This allows the team to keep up to date with the releases of Visual Studio. A little while back, I was really pleased to see that the Visual Studio 2013 update for SSDT-BI (SQL Server Data Tools for Business Intelligence) had been released. Unfortunately, the tools then had to be withdrawn. The good news is that they're back, and you can get the latest version from here: http://www.microsoft.com/en-us/download/details.aspx?id=42313

    Read the article

  • What's an easily extensible technique for storing game data?

    - by Miro
    I'm looking for a library/technique for storing my game resources: levels, objects (effects, world info), items (price, effects, ...), NPCs (visual info, behavior), everything except graphics/audio. I've seen Lua used for Awesome WM configuration. protobuf looks good, but it seems to be designed for network communication. I've tried to write my own parser, but as the project grows it gets harder and harder to manage and to catch all the bugs. My requirements:

    - stability
    - easy extension of the data without needing to convert older versions to newer ones
    - good (it doesn't have to be the best) loading performance
    - not much coding
    - not XML!
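
    One lightweight option that meets most of those points is plain JSON with a schema-version field plus per-type defaults, so older data files keep loading as new attributes are added. A minimal Python sketch; the file layout and field names are assumptions, not a recommendation of a specific library:

        import json

        # Illustrative item definition; field names are assumptions.
        ITEM_DEFAULTS = {
            "price": 0,
            "effects": [],
            "description": "",
        }

        def load_items(path: str) -> dict:
            """Load item definitions, filling in defaults for fields added later.

            Unknown keys are kept and missing keys get defaults, so old data files
            keep working when the game grows new attributes (no conversion step)."""
            with open(path, encoding="utf-8") as f:
                raw = json.load(f)
            if raw.get("schema_version", 1) > 2:
                raise ValueError("data file is newer than this build understands")
            return {name: {**ITEM_DEFAULTS, **fields} for name, fields in raw["items"].items()}

        # Example items.json:
        # {"schema_version": 2,
        #  "items": {"health_potion": {"price": 25, "effects": ["heal_50"]},
        #            "beach_ball":    {"price": 3}}}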

    Read the article

  • ASP.NET repeater control with SQLDataReader as data source

    - by PhilSando
    Here is the markup for the repeater control and its templates:

        <asp:Repeater ID="Repeater" runat="server">
            <HeaderTemplate>
                <table>
                    <tr>
                        <td colspan="3"><h2>Header information:</h2></td>
                    </tr>
            </HeaderTemplate>
            <ItemTemplate>
                <tr>
                    <td><%#Container.DataItem%></td>
                </tr>
            </ItemTemplate>
            <FooterTemplate>
                </table>
            </FooterTemplate>
        </asp:Repeater>

    Here is the code to populate it with data:

        SQLString = "select something from foo where something"
        SQLCommand = New SqlCommand(SQLString, SQLConnection)
        SQLConnection.Open()
        SQLDReader = SQLCommand.ExecuteReader
        If SQLDReader.HasRows Then
            Contactinforepeater.DataSource = SQLDReader
            Contactinforepeater.DataBind()
        End If
        SQLConnection.Close()
        SQLDReader.Close()

    Read the article

  • Splitting Logic, Data, Layout and "Hacks"

    - by fjdumont
    Sure, we have all heard of programming patterns such as MVVM, MVC and the like. But that isn't really what I'm looking into, as layout, data and logic are already pretty much split up (XML layout markup, database, insert your language of choice here). The platform I am developing for is hard to maintain across newer and older OS versions. The project has grown significantly over the last few months and dealing with different platform versions really is a pain. For example, simply disabling a user-interface control across all existing versions took me around 40 lines of code in the logic layer, wrangling with invocation, delegation, singletons that provide UI handling and so on. Is there a clean way to keep track of those "hacks", maybe by extracting them into separate classes or even packages? Should I override existing framework code in order to handle my requirements correctly? If so, does that concept have a name?
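
    A common name for this is a compatibility (or platform-abstraction) layer: one small interface per hacky concern, one implementation per platform version, chosen once at startup so the version checks never leak into the logic layer. A minimal Python sketch; the version threshold and the control methods are illustrative assumptions, not a real platform API:

        class ControlToggler:
            def disable(self, control) -> None:
                raise NotImplementedError

        class ModernToggler(ControlToggler):
            def disable(self, control) -> None:
                control.set_enabled(False)      # straightforward on recent versions (assumed API)

        class LegacyToggler(ControlToggler):
            def disable(self, control) -> None:
                # Older versions need the workaround: neutralise the handler and
                # grey the control out manually (illustrative, not a real API).
                control.on_click = lambda *_: None
                control.set_style("disabled")

        def make_toggler(platform_version: tuple) -> ControlToggler:
            """Pick the implementation once, at startup; the threshold is an assumption."""
            return ModernToggler() if platform_version >= (4, 0) else LegacyToggler()

        # The logic layer only ever sees the interface:
        # toggler = make_toggler(detect_platform_version())
        # toggler.disable(some_control)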

    Read the article

  • Error while installing an application

    - by Bong.Da.City
    So I have installed Synaptic Package Manager. Via it, I checked libopencv-highgui-dev once and applied "complete removal"; after that I installed it again. Now every time I try to install an application, e.g. Format Junkie with

        sudo add-apt-repository ppa:format-junkie-team/release && sudo apt-get update && sudo apt-get install formatjunkie

    the install command gives me this error every time:

        sudo apt-get install formatjunkie
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming. The following information may help
        to resolve the situation:

        The following packages have unmet dependencies:
         libopencv-features2d-dev : Depends: libopencv-highgui-dev (= 2.3.1-11ubuntu2) but it is not going to be installed
        E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

    What should I do? And secondly, what did I do wrong, so that it won't happen another time? Output of lsb_release -a:

        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.10
        Release:        12.10
        Codename:       quantal

    Read the article

  • Displaying Datamatrix in application error screen

    - by DaveNay
    Quite often we will get a report from a user in the field saying there was an error in our application. Frequently this leads to the typical round of "What was the error?" "I don't know, it was just an error." We of course log these faults to the log files, and we can even enable detailed debug logs, but this involves the end user changing a setting in the configuration file, finding the correct files and then emailing them to us. As I'm sure you can all imagine, there are plenty of pitfalls and alligators in this methodology. Recently a couple of people have used their cell phones to email me a "screen capture" of the fault, and while this helps, we still have to scrutinize the image to find the exact fault and, if enabled, the stack trace. So this evening I had the brilliant idea (IMHO) of encoding the fault into a Data Matrix barcode image and then encouraging users to send me a picture from their cell phone. I can then decode the Data Matrix and get a parseable error message! Our core technology is machine vision, so decoding the Data Matrix image would be trivial; I just need to find a method of generating the actual image to display in the fault handler. Thoughts?
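
    Whatever generator ends up producing the symbol, the part worth sketching is the payload: Data Matrix capacity is modest, so the fault record needs to be compact but still parseable on the other end. A small Python sketch of one possible payload format (the fields and the JSON encoding are assumptions; the actual barcode rendering is left to whatever imaging library the application already uses):

        import hashlib
        import json
        import traceback

        def fault_payload(exc: BaseException, app_version: str) -> str:
            """Build a compact, parseable fault record to encode into the barcode.

            The full stack trace is reduced to its innermost frame plus a short
            hash that can be matched against the complete trace in the logs."""
            tb = traceback.extract_tb(exc.__traceback__)
            last = tb[-1] if tb else None
            full_trace = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
            record = {
                "v": app_version,
                "type": type(exc).__name__,
                "msg": str(exc)[:80],
                "at": f"{last.filename.rsplit('/', 1)[-1]}:{last.lineno}" if last else "?",
                "trace_sha1": hashlib.sha1(full_trace.encode()).hexdigest()[:12],
            }
            return json.dumps(record, separators=(",", ":"))

        # The resulting string would then be rendered as a Data Matrix symbol by
        # whatever barcode generator the fault handler has access to.
        try:
            1 / 0
        except ZeroDivisionError as e:
            print(fault_payload(e, "2.4.1"))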

    Read the article

  • EAV - is it really bad in all scenarios?

    - by Giedrius
    I'm thinking of using EAV for some of the stuff in one of my projects, but all the questions about it on Stack Overflow end up with answers calling EAV an anti-pattern. But I'm wondering: is it really that wrong in all cases? Let's say a shop product entity. It has common features, like name, description, image, price, etc., that take part in logic in many places, and it has (semi)unique features; a watch and a beach ball would be described by completely different aspects. So I think EAV would fit for storing those (semi)unique features? All this assumes that for showing the product list, the information in the product table is enough (meaning no EAV is involved), and the data saved using EAV is used only when showing one product, comparing up to 5 products, etc. I've seen such an approach in Magento commerce and it is quite popular, so maybe there are cases when EAV is reasonable?
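
    A small sketch of the hybrid layout described above, using SQLite for brevity: the common, logic-bearing fields stay as ordinary columns on the product table, and only the (semi)unique features go into an EAV-style attribute table that is read on the product-detail and comparison views. Table and column names are illustrative assumptions:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE product (
                id    INTEGER PRIMARY KEY,
                name  TEXT NOT NULL,
                price REAL NOT NULL
            );
            CREATE TABLE product_attribute (      -- the EAV part
                product_id INTEGER REFERENCES product(id),
                attribute  TEXT NOT NULL,
                value      TEXT NOT NULL,
                PRIMARY KEY (product_id, attribute)
            );
        """)
        conn.execute("INSERT INTO product VALUES (1, 'Dive watch', 199.0)")
        conn.executemany(
            "INSERT INTO product_attribute VALUES (?, ?, ?)",
            [(1, "water_resistance_m", "200"), (1, "strap_material", "rubber")],
        )

        # Listing pages touch only the plain columns; the EAV rows are pulled
        # only for the product detail / comparison view.
        detail = dict(
            conn.execute(
                "SELECT attribute, value FROM product_attribute WHERE product_id = ?", (1,)
            ).fetchall()
        )
        print(detail)  # {'water_resistance_m': '200', 'strap_material': 'rubber'}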

    Read the article

  • The Emergence of a New Architecture for Long-term Data Retention

    - by Claudia Caramelli-Oracle
    Dear Partner, A new research report from Wikibon explains how the combination of flash and tape makes for a superior solution for long-term data archives versus using dedupe appliances. The combination of these two technologies, one on the market for a few years and the other for decades, introduces a new concept. The concept is "Flape", a term first coined by Wikibon in October 2012. Flape is a combination of flash (SSD) technology and tape; this combination of technologies, when used for long-term archiving, can save IT departments as much as 300% of their overall IT budget over the course of 10 years. Do you want to know more? You can review the whole report here.

    Read the article

  • Run external application on markdown source in ikiwiki

    - by student
    Can I add a button to each wiki page in ikiwiki which launches an external application (on the client side) or a script with the Markdown source of the current page as input? Edit: I didn't realize that it might be complicated to do this on the client side, as Zenklys' answer suggested. So perhaps I should describe more concretely what I have in mind: I want to have two buttons, "Get LaTeX" and "Get pdf". Clicking on "Get LaTeX" should generate a LaTeX file and the browser should simply open or download that file. Analogously for the PDF. It would even be OK to have a button "Generate LaTeX" instead, which generates the LaTeX code and, after the generation, changes to "Get LaTeX", which simply points to the LaTeX file. So it is not really necessary to do the generation of the files on the client side. It would be OK if this is done (in a temporary folder) on the server side. For the LaTeX and PDF generation I want to use a custom wrapper script for pandoc; let's call it mymarkdown2latex or mymarkdown2pdf.
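
    The wrapper itself is the easy half. A rough Python sketch of what mymarkdown2latex / mymarkdown2pdf could look like on the server side: write the page's Markdown source to a temporary folder, run pandoc on it, and return the path of the result. The ikiwiki button hookup and the output locations are assumptions, and the PDF path additionally needs a LaTeX engine installed for pandoc to use:

        import subprocess
        import sys
        import tempfile
        from pathlib import Path

        def convert(markdown_text: str, to: str = "latex") -> Path:
            """Convert Markdown source to 'latex' or 'pdf' using pandoc."""
            suffix = ".tex" if to == "latex" else ".pdf"
            workdir = Path(tempfile.mkdtemp(prefix="ikiwiki-export-"))
            src = workdir / "page.md"
            out = workdir / f"page{suffix}"
            src.write_text(markdown_text, encoding="utf-8")
            # pandoc picks the output format from the extension of -o.
            subprocess.run(["pandoc", str(src), "-o", str(out)], check=True)
            return out

        if __name__ == "__main__":
            target = sys.argv[1] if len(sys.argv) > 1 else "latex"
            print(convert(sys.stdin.read(), to=target))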

    Read the article
