Search Results

Search found 58436 results on 2338 pages for 'data'.


  • Adding a ppa repo and get key signed - no valid OpenPGP data - proxy issue?

    - by groovehunter
    I want to get a PPA key signed. I tried:

        apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A258828C

    which ran:

        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv-keys A258828C
        gpg: requesting key A258828C from hkp server keyserver.ubuntu.com
        gpgkeys: HTTP fetch error 7: couldn't connect to host
        gpg: no valid OpenPGP data found.
        gpg: Total number processed: 0

    and:

        wget -q http://ppa.launchpad.net/panda3d/ppa/ubuntu/dists/lucid/Release.gpg -O- | apt-key add -
        gpg: no valid OpenPGP data found

    I am behind a proxy; it is configured correctly in apt.conf:

        Acquire::http::Proxy "http://proxy.mycompany.de:3128";

    I also tried setting the proxy environment variables:

        export http_proxy="proxy.mycompany.de:3128"
        export https_proxy="proxy.mycompany.de:3128"
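    One thing worth checking (a hedged suggestion, not a confirmed fix): gpg's hkp keyserver fetch does not read apt.conf, and the environment variables above are missing the http:// scheme prefix that many tools expect. Passing the proxy to the keyserver call directly looks like this:

        # pass the proxy straight to gpg's keyserver helper
        sudo apt-key adv --keyserver keyserver.ubuntu.com \
            --keyserver-options http-proxy=http://proxy.mycompany.de:3128 \
            --recv-keys A258828C

        # and include the scheme when exporting the variables
        export http_proxy="http://proxy.mycompany.de:3128"
        export https_proxy="http://proxy.mycompany.de:3128"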

    Read the article

  • Recorded YouTube-like presentation and "live" demos of Oracle Advanced Analytics

    - by chberger
    Ever want to just sit and watch a YouTube-like presentation and "live" demos of Oracle Advanced Analytics? Then click here! This 1+ hour session focuses primarily on the Oracle Data Mining component of the Oracle Advanced Analytics Option and is tied to the Oracle SQL Developer Days virtual and onsite events. I cover:
    - Big Data + Big Data Analytics
    - Competing on analytics & the value proposition
    - What is data mining? Typical use cases
    - Oracle Data Mining's high-performance, in-database, SQL-based data mining functions
    - Exadata "smart scan" scoring
    - The Oracle Data Miner GUI (an extension that ships with SQL Developer)
    - Oracle Business Intelligence EE + Oracle Data Mining results/predictions in dashboards
    - Applications "powered by Oracle Data Mining" for factory-installed predictive analytics methodologies
    - Oracle R Enterprise
    Please contact [email protected] should you have any questions. Hope you enjoy! Charlie Berger, Sr. Director of Product Management, Oracle Data Mining & Advanced Analytics, Oracle Corporation

    Read the article

  • SQL Server Data Tools & Entity Framework - is there any synergy here?

    - by Benjol
    Coming out of a project using Linq2Sql, I suspect that the next (bigger) one might push me into the arms of Entity Framework. I've done some reading up on the subject, but what I haven't managed to find is a coherent story about how SQL Server Data Tools and Entity Framework should/could/might be used together. Were they conceived totally separately, so that using them together is rubbing them the wrong way? Are they somehow totally orthogonal and I'm missing the point? Some reasons why I think I might want both:
    - SSDT is great for having 'compiled' (checked) and easily versionable SQL and schema.
    - But the SSDT 'migration/update' story is not convincing (to me): "update anything" works OK for schema, but there's no way (AFAIK) that it can ever work for data.
    - On the other hand, I haven't tried EF migrations enough to know if they present similar problems, but the Up/Down bits look quite handy.

    Read the article

  • What is the relationship between the business logic layer and the data access layer?

    - by Matt Fenwick
    I'm working on an MVC-ish app (I'm not very experienced with MVC, hence the "-ish"). My model and data access layer are hard to test because they're very tightly coupled, so I'm trying to decouple them. What is the nature of the relationship between them? Should just the model know about the DAL? Should just the DAL know about the model? Or should the model and the DAL each be listeners of the other? In my specific case:
    - it's a web application
    - the model is client-side (JavaScript)
    - the data is accessed from the back-end using Ajax
    - persistence/back-end is currently PHP/MySQL, but may have to switch to Python/GoogleDataStore on GAE
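    A common way to make this testable is to have the model depend only on a narrow store interface that the Ajax-backed DAL implements. A minimal JavaScript sketch, assuming jQuery for the Ajax calls (all names here are illustrative, not from the question):

        // The DAL implements a small interface the model consumes.
        function makeAjaxStore(baseUrl) {
          return {
            fetchItems: function (cb) { $.getJSON(baseUrl + '/items', cb); },
            saveItem: function (item, cb) { $.post(baseUrl + '/items', item, cb); }
          };
        }

        // The model knows only the interface, never the transport, so the
        // PHP/MySQL back-end can be swapped for GAE without touching it.
        function ItemModel(store) {
          var self = this;
          self.items = [];
          self.load = function (done) {
            store.fetchItems(function (data) {
              self.items = data;
              if (done) done(self.items);
            });
          };
        }

        // Production: new ItemModel(makeAjaxStore('/api'));
        // Tests: pass a fake store that returns canned data synchronously.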

    Read the article

  • Windows 7: Search indexing is stuck

    - by Ricket
    When I open Indexing Options, it says:

        4,317 items indexed
        Indexing in progress. Search results might not be complete during this time.

    It's stuck at 4,317 though; no more items have been indexed. Worst of all, SearchIndexer.exe is taking up 100% CPU (well, 50%, but I have a dual-core CPU; it's taking up all the processing power it can). It is not causing hard drive activity, though.

    I tried clicking "Troubleshoot search and indexing" at the bottom of the Indexing Options window, but it couldn't find any problem. I've also tried the registry repair that several websites suggest: I changed the SetupCompletedSuccessfully value under HKLM\SOFTWARE\Microsoft\Windows Search to 0 and restarted the computer, and it apparently repaired itself because the value flipped back to 1, but the same problem continues to occur. It's reducing the battery life of my laptop and making it so hot that my fans run all the time. I've had to disable the Windows Search service. How can I fix this? Do I need to just flat-out reformat my computer?

    Update: I've tried rebuilding a couple of times. There's nothing unusual about the locations I have to index, and I don't have any downloads in progress or anything like that. I don't see any reason why it stopped, and I noticed it much too late to do a system restore. At this point, I'm hoping someone will offer up some secret answer that will fix the problem, thus the bounty.

    Another update: I tried starting the service again, just to let it try yet again. It seemed okay at first (Indexing Options showed it operating at reduced speed due to user activity, and the number of files was going up). A while later I checked, and the service had stopped. Event Viewer revealed some errors like this:

        Log Name:      Application
        Source:        Application Error
        Date:          2/1/2010 7:34:23 PM
        Event ID:      1000
        Task Category: (100)
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      ricky-win7
        Description:
        Faulting application name: SearchIndexer.exe, version: 7.0.7600.16385, time stamp: 0x4a5bcdd0
        Faulting module name: NLSData0007.dll, version: 6.1.7600.16385, time stamp: 0x4a5bda88
        Exception code: 0xc0000005
        Fault offset: 0x002141ba
        Faulting process id: 0x13a0
        Faulting application start time: 0x01caa39f2a70ec02
        Faulting application path: C:\Windows\system32\SearchIndexer.exe
        Faulting module path: C:\Windows\System32\NLSData0007.dll
        Report Id: b4f7a7ae-0f92-11df-87fc-e5d65d8794c2

        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Application Error" />
            <EventID Qualifiers="0">1000</EventID>
            <Level>2</Level>
            <Task>100</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2010-02-02T00:34:23.000000000Z" />
            <EventRecordID>10689</EventRecordID>
            <Channel>Application</Channel>
            <Computer>ricky-win7</Computer>
            <Security />
          </System>
          <EventData>
            <Data>SearchIndexer.exe</Data>
            <Data>7.0.7600.16385</Data>
            <Data>4a5bcdd0</Data>
            <Data>NLSData0007.dll</Data>
            <Data>6.1.7600.16385</Data>
            <Data>4a5bda88</Data>
            <Data>c0000005</Data>
            <Data>002141ba</Data>
            <Data>13a0</Data>
            <Data>01caa39f2a70ec02</Data>
            <Data>C:\Windows\system32\SearchIndexer.exe</Data>
            <Data>C:\Windows\System32\NLSData0007.dll</Data>
            <Data>b4f7a7ae-0f92-11df-87fc-e5d65d8794c2</Data>
          </EventData>
        </Event>

    If you are having the same error and arrived here from a Google search, please comment or add an answer detailing your progress on this, if any...

    Read the article

  • Is Apple preparing a cloud version of Mac OS? The company obtains a new patent and is reportedly building a $1 billion data center

    Apple has just obtained a new patent relating to a "net-booted" operating system and is also reportedly preparing a gigantic data center in a rural area of North Carolina, a data center expected to cost no less than a billion dollars. The patent application in question was filed in 2006. The patent is described as "the provision of a reliable and maintainable operating system in a net-booted environment". The availability of the dét...

    Read the article

  • Why do Core Data sqlite table columns start with 'Z'?

    - by Dia
    I was looking at the sqlite table that Core Data generates and noticed that all table columns start with 'Z'. I realize this is an implementation detail, but I was curious as to why that's the case and if there was a design decision involved in this. Anyone happen to know or guess why? Here's a sample schema output of a Core Data sqlite database:

        sqlite> .schema
        CREATE TABLE ZPOST ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, ZPOSTID INTEGER, ZUSER INTEGER, ZCREATEDAT TIMESTAMP, ZTEXT VARCHAR );
        CREATE TABLE ZUSER ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, ZUSERID INTEGER, ZAVATARIMAGEURLSTRING VARCHAR, ZUSERNAME VARCHAR );
        CREATE TABLE Z_METADATA (Z_VERSION INTEGER PRIMARY KEY, Z_UUID VARCHAR(255), Z_PLIST BLOB);
        CREATE TABLE Z_PRIMARYKEY (Z_ENT INTEGER PRIMARY KEY, Z_NAME VARCHAR, Z_SUPER INTEGER, Z_MAX INTEGER);

    Read the article

  • Store VOD WMI data in a database directly or use CQRS?

    - by JD01
    I need to collect Video on Demand bandwidth usage every few minutes (or maybe every few seconds) and store it in a database so users can produce graphs of bandwidth usage over a period of time (a few hours, days, weeks, or possibly even months). The sort of data that will be stored includes the number of users watching videos, current server bandwidth (Mb/s), multicast bit rate, etc. I am wondering whether CQRS with event sourcing would be a good approach, as I could then rebuild my objects to create different projections (i.e. different graphs/reports etc.), but then again it seems like I am introducing complexity that might not be needed. Or would it be best to just put the data directly into a database (currently Postgres) and query off that? Having thought about it, my table is a form of audit log anyway, so I don't think I need event sourcing at all. Any thoughts?
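    For the "store it directly" option, an append-only samples table already is the event log that CQRS would give you. A minimal Postgres sketch (table and column names are illustrative, not from the question):

        -- one row per poll; only ever inserted, never updated
        CREATE TABLE bandwidth_sample (
            sampled_at     timestamptz NOT NULL,
            server_id      integer     NOT NULL,
            viewers        integer     NOT NULL,
            bandwidth_mbps numeric     NOT NULL,
            multicast_kbps numeric,
            PRIMARY KEY (server_id, sampled_at)
        );

        -- a "projection" is then just an aggregate query, e.g. hourly
        -- averages over the last week for a graph
        SELECT date_trunc('hour', sampled_at) AS hour,
               avg(bandwidth_mbps)            AS avg_mbps
        FROM   bandwidth_sample
        WHERE  sampled_at >= now() - interval '7 days'
        GROUP  BY 1
        ORDER  BY 1;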

    Read the article

  • How do you choose a programming/data structure/algorithm book?

    - by Fanatic23
    I really should not be mentioning the name of the book, but the first time I read it (during my undergrad days) I almost concluded that data structures was a bad course to pick. Which brings me to the question I am asking here: what makes a programming, data structure, or algorithm book tick? Clearly, lucid explanation is one thing. But I also realize that organization of the material is very important, and so are diagrams. What else? Some pointers would obviously help when I hang out in my neighborhood computer book shop the next time.

    Read the article

  • How to get data from a bluetooth device that is not visible?

    - by jonobacon
    I just bought a Fitbit One, which includes Bluetooth 4.0 to sync with mobile devices. Currently libfitbit does not include support for Bluetooth syncing, so I would like to see how much data I can get out of the device that I can pass on to the libfitbit devs so that they can explore Bluetooth support. I ran:

        hcitool scan

    which unfortunately did not return any devices. I also used blueman to scan for devices and nothing was found either. Therefore I am assuming that the Bluetooth radio in the device is not visible by default. Can anyone recommend any ways to get data out of the device that could be helpful?
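    A hedged suggestion rather than a confirmed answer: hcitool scan performs a classic (BR/EDR) inquiry, and a Bluetooth 4.0 Low Energy device typically will not show up in one. BlueZ has a separate LE scan:

        # LE devices need an LE scan; a classic inquiry won't see them
        sudo hcitool lescan

        # once an address appears, list the device's GATT services
        # (the address below is a placeholder)
        gatttool -b XX:XX:XX:XX:XX:XX --primary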

    Read the article

  • Dashboard to aggregate Google Analytics, Facebook, YouTube etc tracking data?

    - by Richard
    I'd like to see as much tracking data as possible about my online presence, in one single dashboard - so views/conversions from Google Analytics data, the performance of my Facebook campaigns via the Insights API, views/clicks from my YouTube campaigns, etc. This could be as simple as a graph with time on the x-axis, and key indicators from each source on the y-axis (conversions from Analytics, likes on Facebook, views on YouTube, etc). The idea is that I can see customer engagement with each source, over time. I can write my own such dashboard easily enough, but I wondered if there was something off-the-shelf that already did this. Apologies if this isn't the right forum for such a question - would appreciate tips for the best place to ask.

    Read the article

  • Vertex data split into separate buffers, or kept in one structure?

    - by kiba2
    Is it better to have all vertex data in one structure, like this:

        class MyVertex {
            int x, y, z;
            int u, v;
            int normalx, normaly, normalz;
        };

    or to have each component (location, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure, because the components are always the same for each instance of a shared vertex, and that seems to be true for things like character models (e.g. the normal should be an average of adjacent normals for smooth lighting). One instance where this doesn't seem to work is other kinds of meshes, like say a cube, where the texture coordinates for each face may be the same but end up differing where the vertices are shared. Does everybody normally keep them separate? Won't this make them less space-efficient if there needs to be an instance of texture coordinates and normals for each triangle vertex (they won't be indexed)? Can OpenGL even handle this mixing of indexed (for location) vs non-indexed buffers in the same VBO?
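    On the last point: OpenGL's element array indexes whole vertices, so one index buffer cannot index positions and texture coordinates independently; a cube corner with different UVs per face has to be duplicated. Interleaving inside a single VBO is well supported via the stride/offset arguments. A hedged C++ sketch (floats rather than the ints above; attribute locations 0-2 and the vbo, vertices, and vertexCount variables are assumed, not from the question):

        #include <cstddef>   // offsetof

        struct Vertex {
            float x, y, z;    // position
            float u, v;       // texture coordinates
            float nx, ny, nz; // normal
        };

        // one interleaved buffer, three attributes sharing a stride
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex),
                     vertices, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void*)offsetof(Vertex, x));
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void*)offsetof(Vertex, u));
        glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void*)offsetof(Vertex, nx));
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
        glEnableVertexAttribArray(2);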

    Read the article

  • How can I make Zaz save its profile data?

    - by RolandiXor - The Ice Man
    I've been playing Zaz recently as a time-waster and stress-beater, but it seems not to be actively maintained, and it is not saving profile data under Ubuntu 12.10. It's getting to be more stressful than fun, because it keeps crashing, and under Unity, GNOME Shell, and KDE (in other words, any OpenGL-enabled WM) it makes the GPU lock up. How can I make it save the profile data, or create a profile that I can manually place my level info in? I'm tired of playing the same levels over and over and not being able to start from the ones I've already passed. I have yet to find any info on fixing this. Any clues?

    Read the article

  • UK Data Breaches Up 10-fold in 10 Years

    - by TATWORTH
    At http://www.v3.co.uk/v3-uk/news/2201863/uk-data-breaches-rocket-by-1-000-percent-over-past-five-years there is an interesting report on the increase in data breaches reported in the UK. A lot of this increase may simply be a change in legislation that has made reporting a statutory obligation. Some questions to ask yourself:
    - Are server logs checked for untoward activity?
    - Do you have a reporting policy if something is amiss?
    - Did you design security in from the start of your application design?
    - Do you log, for example, failed logons?
    - Do you run tools to check for code integrity?
    - Is my defense a strategy of defense in depth?
    - Do you realise that 60% of hack attacks are internal?
    - Whilst SQL injection is a problem that affects practically all application code platforms, within Microsoft applications do you run FxCop? Do you run any of the other free tools for checking?

    Read the article

  • Iomega Home Media Network Hard Drive 1TB Cloud Edition failed, data recovery?

    - by lonbon69
    I have an Iomega Home Media Network Drive Cloud Edition 1TB that started clicking and then displayed a failure LED code (power LED plus red LED). I removed the SATA drive, inserted it in an "All in 1 HDD Docking Station", and connected it to a laptop by USB (the laptop runs Windows 7). The dock is seen as drive E: but cannot be accessed, and shows 0% data etc. The drive does spin up when I power the dock. Web searches say the drive has an EXT3 file system and to use Ubuntu to access it. I have now set up a dual-boot laptop but still do not see the drive using Ubuntu. Is there something else I need to do to get it recognised? I really would like to recover the data; any suggestions please?
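    A hedged starting point from an Ubuntu session (device names are illustrative; a clicking disk may be failing mechanically, in which case imaging it first with ddrescue is safer than mounting it, and note that some NAS firmwares put the data partition inside LVM or a RAID member, so a plain mount can still fail):

        sudo fdisk -l      # does the kernel see the disk and a partition table?
        dmesg | tail       # any I/O errors logged when the dock is plugged in?

        # if a partition shows up (e.g. /dev/sdb1), try mounting it read-only
        sudo mkdir -p /mnt/iomega
        sudo mount -t ext3 -o ro /dev/sdb1 /mnt/iomega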

    Read the article

  • Is a blob more efficient than a varchar for data that can be ANY size?

    - by BillyNair
    When setting up a database I want to use the most efficient data type for potentially fairly long data. Currently my project is to store song titles and thoughts pertaining to each song. Some titles might be 5 characters, others longer than 100 characters, and the thoughts could run pretty long. Is it more efficient to use a varchar set to 8000 or to use a blob? Is using a blob the same as a varchar, in that there is a set size allocated regardless of what it holds? Or is it just a pointer that doesn't really use much space in the table? Is there a certain set size of a blob in KB, or is it expandable?
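    For what it's worth (hedged, since the question doesn't name the database engine): in most engines VARCHAR, TEXT, and BLOB are all variable-length, storing roughly the actual bytes plus a small length prefix rather than a fixed allocation. A MySQL-flavoured sketch (names illustrative):

        -- titles are short and benefit from indexing; free-form prose
        -- fits a TEXT column, which the engine can store off-page
        CREATE TABLE song_thought (
            id       INT AUTO_INCREMENT PRIMARY KEY,
            title    VARCHAR(255) NOT NULL,
            thoughts TEXT
        );
        CREATE INDEX idx_song_thought_title ON song_thought (title);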

    Read the article

  • Zelda-style top-down RPG. How to store tile and collision data?

    - by Delerat
    I'm looking to build a Zelda: LTTP style top-down RPG. I've read a lot on the subject and am currently going back and forth on a few solutions. I'm using C#, MonoGame, and Tiled. For my tile maps, these are the choices I can see in front of me:

    1. Store each tile in its own array, each one having 3-4 layers, texture/animation, depth, flags, and maybe collision (depending on how I do it). I've read warnings about memory issues going this route, and my biggest map will probably be 160x120 tiles. My average map, however, will be about 40x30. The number of tiles might cut in half if I decide to double my tile size, which is currently 16x16. This is the most appealing approach for me, as I feel like I would know how to save maps, make changes, and separate the map into chunks for collision checks.

    2. Store the static parts of my tile map in multiple arrays acting as the different layers, and then use entities for anything that isn't static. All of the other tile data such as collisions, depth, etc., would be stored in their own layers as well, I guess. This way just seems messy to me, though.

    Regardless of which one I choose, I'm also unsure how to plan all of that other tile data. I could write a bunch of code that knows which integer represents which tile and its data, but if I changed a tileset in Tiled and exported it again, all of those integers could potentially change and I'd have to adjust a whole bunch of code.

    My other issue is how to do collision. I want to at least support angled collision that slides you around the corners of objects like LTTP does, if not more oddball shapes as well. So do I:

    1. Store collision as a flag for binary collision? Could I get this to support angles? Would it be fine to store collision as an integer and have each number represent a certain angle of collision?
    2. Store a list of rectangles or other shapes and do collision that way?

    Sorry for the large two-part (three-part?) question. I felt like these needed to be asked together, as I believe each choice influences the others.
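    On the memory worry in option 1 (a hedged back-of-envelope, not a benchmark): 160x120 tiles across four layers at four bytes per tile is roughly 300 KB, well within budget. A minimal C# sketch of a struct-per-tile layout; all names are illustrative, and collision is a small enum so angled shapes can be added later:

        using System.Collections.Generic;

        enum CollisionShape : byte { None, Full, SlopeNE, SlopeNW, SlopeSE, SlopeSW }

        struct Tile
        {
            public ushort TilesetIndex;    // resolved from a stable string id at
                                           // load time, so re-exporting the Tiled
                                           // tileset can't silently renumber tiles
            public byte Depth;
            public CollisionShape Collision;
        }

        class MapLayer
        {
            public Tile[,] Tiles;
            public MapLayer(int width, int height) { Tiles = new Tile[width, height]; }
        }

        class TileMap
        {
            public List<MapLayer> Layers = new List<MapLayer>();
            // anything that moves or animates independently lives as an
            // entity, not a tile
        }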

    Read the article

  • How can I fix a very broken Ubuntu installation without losing data?

    - by jredkai
    Okay guys, I was installing a program (I do not remember the name). When I ran sudo apt-get update I was told about missing dependencies, and it told me to run sudo apt-get install -f, which deleted just about every dependency Ubuntu needs. Now I cannot log in or anything, and in GRUB it actually says Debian instead of Ubuntu. I have tons of important data in that partition. Can I somehow use the live CD to fix this problem? I mean, re-install without losing data? Any help is greatly appreciated!!!
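    The usual live-CD rescue route is to chroot into the installed system and reinstall the desktop metapackage, which touches packages rather than your home data. A hedged sketch (/dev/sda1 is illustrative; your root partition may differ):

        # from the live session, mount the installed system and chroot in
        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt

        # inside the chroot, pull the desktop stack back in
        apt-get update
        apt-get install ubuntu-desktop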

    Read the article

  • Does the direction of storage make us bad data citizens?

    - by simonsabin
    My career started at a company where we hardly had email, and the network was a 10BASE2 affair with cables running all around the office. You used floppy disks, and the thought of a GB of data was absurd. You had to look after every byte and only keep what you really needed. Whilst the cost of spinning disks gradually falls, the cost and size of flash storage continues to plummet. The new Crucial SSD is £380 for 1TB, and I can now keep 128GB of data on an SD card the size of my finger. It only costs...(read more)

    Read the article

  • How to visualize real time data on Android? [closed]

    - by matarsak
    I want to build an Android app that visualizes real-time data (2D animation). I set up a UDP channel that gets the data; now I want to visualize it. I know that I can use OpenGL ES, but after a few weeks I don't think I'd be able to learn that. What about Processing for Android? Could it be used for an extensive visualization task like this, or is it limited in some way? I've heard it's not hard to learn. Any other options?
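    Before reaching for OpenGL ES or Processing, a plain Canvas-based custom View can handle a simple 2D line plot; a hedged Java sketch (the sample buffer and how the UDP thread fills it are assumptions, not from the question):

        import android.content.Context;
        import android.graphics.Canvas;
        import android.graphics.Paint;
        import android.view.View;

        public class PlotView extends View {
            private final float[] samples = new float[256]; // filled by the UDP reader

            public PlotView(Context context) { super(context); }

            @Override
            protected void onDraw(Canvas canvas) {
                Paint paint = new Paint();
                float step = getWidth() / (float) samples.length;
                for (int i = 1; i < samples.length; i++) {
                    canvas.drawLine((i - 1) * step, getHeight() - samples[i - 1],
                                    i * step, getHeight() - samples[i], paint);
                }
                // simplest possible redraw loop; a timer or Choreographer
                // callback is kinder to the battery
                postInvalidate();
            }
        }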

    Read the article

  • How to export SSIS to Microsoft Excel without additional software?

    - by Dr. Zim
    This question is long-winded because I have been updating it over a very long time while trying to get SSIS to properly export Excel data. I managed to solve this issue, although not correctly. Aside from someone providing a correct answer, the solution listed in this question is not terrible.

    The only answer I found was to create a single-row named range wide enough for my columns. In the named range, put sample data and hide the row. SSIS appends the data and reads metadata from the single row (that is close enough for it to drop stuff in it). The data takes the format of the hidden single row. This allows headers, etc. WOW, what a pain in the butt. It will take over 450 days of exports to recover the time lost. However, I still love SSIS and will continue to use it because it is still way better than FileMaker, LOL. My next attempt will be doing the same thing in the report server.

    Original question notes: If you are in the SQL Server Integration Services designer and want to export data to an Excel file starting on something other than the first line, let's say the fourth line, how do you specify this? I tried going into the Excel Destination of the Data Flow, changed the AccessMode to OpenRowSet from Variable, then set the variable to "YPlatters$A4:I20000". This fails, saying it cannot find the sheet. The sheet is called YPlatters. I thought you could specify (Sheet$)(Starting Cell):(Ending Cell)?

    Update: Apparently in Excel you can select a set of cells and name them with the name box. This allows you to select the name instead of the sheet without the $ dollar sign. Oddly enough, whatever range you specify, it appends the data to the next row after the range. Oddly, as you add data, it increases the named selection's row count. Another odd thing is that the data takes the format of the last line of the range specified. My header rows are bold. If I specify a range that ends with the header row, the data appends to the row below it and makes all the entries bold. If you specify one row lower, it puts a blank line between the header row and the data, but the data is not bold.

    Another update: No matter what I try, SSIS samples the "first row" of the file and sets the metadata according to what it finds. However, if you have sample data that has a value of zero but is formatted as the first row, it treats that column as text and inserts numeric values with a single quote in front ('123.34). I also tried headers that do not reflect the data types of the columns. I tried changing the metadata of the Excel destination, but it always changes it back when I run the project, then fails saying it will truncate data. If I tell it to ignore errors, it imports everything except that column. Several days of several hours apiece later...

    Another update: I tried every combination. A mostly-working example is to create the named range starting with the column headers. Format your column headers as you want the data to look, as the data takes on this format. In my example, these exist from A4 to E4, which is my defined range. SSIS appends to the row after the defined range, so defining A4 to E68 appends the rows starting at A69. You define the connection as having the first row contain the field names. It takes on the metadata of the header row (oddly, not the second row), and it guesses at the data type, not the formatted data type of the column; i.e., headers are text, so all my metadata is text. If your headers are bold, so is all of your data. I even tried making a sample data row, without success...

    I don't think anyone actually uses Excel with the default MS SSIS export. If you could define the "insert range" (A5 to E5) with no header row and format those columns (currency, not bold, etc.) without it skipping a row in Excel, this would be very helpful. From what I gather, no one uses SSIS to export Excel without a third-party connection manager. Any ideas on how to set this up properly so that data is formatted correctly, i.e., the metadata read from Excel matches the real data, and formatting inherits from the first row of data, not the headers in Excel?

    One last update (July 17, 2009): I got this to work very well. One thing I added was IMEX=1 in the Excel connection string: "Excel 8.0;HDR=Yes;IMEX=1". This forces Excel (I think) to look at all rows to see what kind of data is in each column. Generally, this prevents dropped information: say, for instance, you have a zip code, then about 9 rows down you have a zip+4; without this, Excel blanks that field entirely without error. With IMEX=1, it recognizes that Zip is actually a character field instead of numeric.

    And of course, one more update (August 27, 2009): IMEX=1 will succeed importing data with missing contents in the first 8 rows, but it will fail exporting data where no data exists. So have it on your import connection string, but not on your export Excel connection string. I have to say, after so much fiddling, it works pretty well.
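    For reference, a hedged example of where the IMEX setting sits in a full Jet OLE DB connection string (the file path is illustrative, not from the question):

        Provider=Microsoft.Jet.OLEDB.4.0;
        Data Source=C:\exports\report.xls;
        Extended Properties="Excel 8.0;HDR=YES;IMEX=1"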

    Read the article

  • SSRS 2005: How do I make available varbinary data for download in a report?

    - by Angelo
    Hi, SSRS newbie question here... I have a table where one column is varbinary(max) data. I would like to make a report that makes this data available for download as a hyperlink so the user can just click on the item and get a file download dialog for the binary data. In this particular case, the binary data happens to be the content of old pdf files, but that shouldn't matter. I tried searching around but I can't find any pointers on how to do this. It seems to me that it should be possible. There are ways to display images in a report using varbinary data, so it makes sense that one should be able to make arbitrary binary data downloadable on a report, right?

    Read the article
