Search Results

Search found 72319 results on 2893 pages for 'file explorer'.


  • Can't pipe or redirect cygwin grep output

    - by Thomas
    How do I get grep to work properly in a regular cmd.exe?

        > grep -o 'ProductVersion\".*\".*\"' foo.txt | grep -o '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'
        foo.txt:ProductVersion" Value="59.59.140.59"
        grep: |: No such file or directory
        grep: grep: No such file or directory
        grep: [0-9]+\.[0-9]+\.[0-9]+\.[0-9]+: No such file or directory

    and

        > grep -o 'ProductVersion\".*\".*\"' foo.txt >> blah.txt
        foo.txt:ProductVersion" Value="59.59.140.59"
        grep: >>: No such file or directory
        grep: blah.txt: No such file or directory

    Read the article

  • Setting XSL-FO XML Schema in Visual Studio

    - by Lukasz Kurylo
    I'm playing lately with XSL-FO for generating pdf documents. XSL-FO has a long list of available tags and attributes, and for a new guy who wants to create a simple document, finding the proper one is a nightmare. Fortunately, we can set a schema for XSL-FO, which gives us full IntelliSense in VS. For a simple *.fo file, we can set the path to the schema directly in the file:

        <?xml version="1.0" encoding="utf-8"?>
        <fo:root
              xmlns:fo="http://www.w3.org/1999/XSL/Format"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.w3.org/1999/XSL/Format http://www.xmlblueprint.com/documents/fop.xsd">

    We can of course use the built-in VS XML Schemas selector instead. To use it, we must copy the schema file to the Schemas catalog (the default path for VS2012 is C:\Program Files (x86)\Microsoft Visual Studio 11.0\Xml\Schemas). Then we can go to Properties of the opened xml/xslt file and set the newly added schema for the file. From now on, we should have IntelliSense enabled.

    Read the article

  • OData – The easiest service I can create: now with updates

    - by Jon Dalberg
    The other day I created a simple NastyWord service exposed via OData. It was read-only and used an in-memory backing store for the words. Today I'll modify it to use a file instead of a list, and I'll accept new nasty words by implementing IUpdatable directly. The first thing to do is enable the service to accept new entries. This is done at configuration time by adding the "WriteAppend" access rule:

        public class NastyWords : DataService<NastyWordsDataSource>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Next I placed a file, NastyWords.txt, in the "App_Data" folder and added a few *choice* words to start. This required one simple change to our NastyWordsDataSource.cs file:

        public NastyWordsDataSource()
        {
            UpdateFromSource();
        }

        private void UpdateFromSource()
        {
            var words = File.ReadAllLines(pathToFile);
            NastyWords = (from w in words
                          select new NastyWord { Word = w }).AsQueryable();
        }

    Nothing too shocking here, just reading each line from the NastyWords.txt file and exposing them. Next, I implemented IUpdatable, which comes with a boat-load of methods. We don't need all of them for now since we are only concerned with allowing new values. Here are the methods we must implement; all the others throw a NotImplementedException:

        public object CreateResource(string containerName, string fullTypeName)
        {
            var nastyWord = new NastyWord();
            pendingUpdates.Add(nastyWord);
            return nastyWord;
        }

        public object ResolveResource(object resource)
        {
            return resource;
        }

        public void SaveChanges()
        {
            var intersect = (from w in pendingUpdates
                             select w.Word).Intersect(from n in NastyWords
                                                      select n.Word);

            if (intersect.Count() > 0)
                throw new DataServiceException(500, "duplicate entry");

            var lines = from w in pendingUpdates
                        select w.Word;

            File.AppendAllLines(pathToFile, lines, Encoding.UTF8);

            pendingUpdates.Clear();

            UpdateFromSource();
        }

        public void SetValue(object targetResource, string propertyName, object propertyValue)
        {
            targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null);
        }

    I use a simple list to contain the pending updates and only commit them when the "SaveChanges" method is called. Here's the order these methods are called in our service during an insert:

        CreateResource – here we just instantiate a new NastyWord and stick a reference to it in our pending updates list.
        SetValue – this is where the "Word" property of the NastyWord instance is set.
        SaveChanges – get the list of pending updates, barfing on duplicates, write them to the file and clear our pending list.
        ResolveResource – the newly created resource will be returned directly here since we aren't dealing with "handles" to objects but the actual objects themselves.

    Not too bad, eh? I didn't find this documented anywhere, but a little bit of digging in the OData spec and use of Fiddler made it pretty easy to figure out.
    Here is some client code which would add a new nasty word:

        static void Main(string[] args)
        {
            var svc = new ServiceReference1.NastyWordsDataSource(new Uri("http://localhost.:60921/NastyWords.svc"));
            svc.AddToNastyWords(new ServiceReference1.NastyWord() { Word = "shat" });

            svc.SaveChanges();
        }

    Here's all of the code so far to implement the service:

        using System;
        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;
        using System.ServiceModel.Web;
        using System.Web;
        using System.IO;
        using System.Text;

        namespace ONasty
        {
            [DataServiceKey("Word")]
            public class NastyWord
            {
                public string Word { get; set; }
            }

            public class NastyWordsDataSource : IUpdatable
            {
                private List<NastyWord> pendingUpdates = new List<NastyWord>();
                private string pathToFile = @"path to your\App_Data\NastyWords.txt";

                public NastyWordsDataSource()
                {
                    UpdateFromSource();
                }

                private void UpdateFromSource()
                {
                    var words = File.ReadAllLines(pathToFile);
                    NastyWords = (from w in words
                                  select new NastyWord { Word = w }).AsQueryable();
                }

                public IQueryable<NastyWord> NastyWords { get; private set; }

                public void AddReferenceToCollection(object targetResource, string propertyName, object resourceToBeAdded)
                {
                    throw new NotImplementedException();
                }

                public void ClearChanges()
                {
                    pendingUpdates.Clear();
                }

                public object CreateResource(string containerName, string fullTypeName)
                {
                    var nastyWord = new NastyWord();
                    pendingUpdates.Add(nastyWord);
                    return nastyWord;
                }

                public void DeleteResource(object targetResource)
                {
                    throw new NotImplementedException();
                }

                public object GetResource(IQueryable query, string fullTypeName)
                {
                    throw new NotImplementedException();
                }

                public object GetValue(object targetResource, string propertyName)
                {
                    throw new NotImplementedException();
                }

                public void RemoveReferenceFromCollection(object targetResource, string propertyName, object resourceToBeRemoved)
                {
                    throw new NotImplementedException();
                }

                public object ResetResource(object resource)
                {
                    throw new NotImplementedException();
                }

                public object ResolveResource(object resource)
                {
                    return resource;
                }

                public void SaveChanges()
                {
                    var intersect = (from w in pendingUpdates
                                     select w.Word).Intersect(from n in NastyWords
                                                              select n.Word);

                    if (intersect.Count() > 0)
                        throw new DataServiceException(500, "duplicate entry");

                    var lines = from w in pendingUpdates
                                select w.Word;

                    File.AppendAllLines(pathToFile, lines, Encoding.UTF8);

                    pendingUpdates.Clear();

                    UpdateFromSource();
                }

                public void SetReference(object targetResource, string propertyName, object propertyValue)
                {
                    throw new NotImplementedException();
                }

                public void SetValue(object targetResource, string propertyName, object propertyValue)
                {
                    targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null);
                }
            }

            public class NastyWords : DataService<NastyWordsDataSource>
            {
                // This method is called only once to initialize service-wide policies.
                public static void InitializeService(DataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }
            }
        }

    Next time we'll allow removing nasty words. Enjoy!

    Read the article

  • FTP "PUT" fails from Virtual Machine, but not host PC: 504 Command not implemented for that paramete

    - by BrianH
    I have an FTP script I'm using to automate a file transfer. The transfer works fine on my PC (XP SP2), but when I try to run it on a VM on my PC (also XP SP2), the "put" command fails with: 504 Command not implemented for that parameter.

    FTP file:

        open [ftp site]
        [username]
        [password]
        cd [directory on FTP server]
        binary
        hash
        put ..\[subfolder1]\[Subfolder2]\[subfolder3]\[filename]
        bye

    The FTP site/server is around the world, and not under my control. From what I understand of a 504, the command should NEVER work, but since the same script DOES work on my PC (hosting the VM), that eliminates syntax, file naming, etc. The put command, when triggered from the VM, actually creates a 0-length file on the target FTP server but doesn't populate the file.

    Read the article

  • Oracle has some very helpful and free code...I think

    - by Casey
    I found that some of the code that Oracle uses is very useful, so I don't have to re-invent the wheel. Given this is at the top of the file where the code in question is:

        /*
         * Copyright (c) 1997, 2006, Oracle and/or its affiliates. All rights reserved.
         * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
         *
         * This code is free software; you can redistribute it and/or modify it
         * under the terms of the GNU General Public License version 2 only, as
         * published by the Free Software Foundation. Oracle designates this
         * particular file as subject to the "Classpath" exception as provided
         * by Oracle in the LICENSE file that accompanied this code.
         *
         * This code is distributed in the hope that it will be useful, but WITHOUT
         * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
         * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
         * version 2 for more details (a copy is included in the LICENSE file that
         * accompanied this code).
         *
         * You should have received a copy of the GNU General Public License version
         * 2 along with this work; if not, write to the Free Software Foundation,
         * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
         *
         * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
         * or visit www.oracle.com if you need additional information or have any
         * questions.
         */

    If I leave the text intact, put it in my C++ header, credit Oracle for each method, and package the source into a static library... is it still a no-no?

    Read the article

  • Loading main javascript on every page? Or breaking it up to relevant pages?

    - by Kyle
    I have a 700 KB decompressed JS file which is loaded on every page. Before, I had 12 JavaScript files on each page, but to reduce HTTP requests I combined them all into one file. This file is ~130 KB gzipped and is served over gzip. However, on the local computer it is still unpacked and loaded on every page. Is this a performance issue? I've profiled the JavaScript with the Firebug profiler but did not see any issues. The problem/illusion I am facing is that there are jQuery libraries compressed into that file that are sometimes not used on the current page. For example, jQuery DataTables is 200 KB compressed, and that is only loaded on 2 of my website pages. Another is jqPlot, and that is another 200 KB. I now have 400 KB of excess code that isn't executed on 80% of the pages. Should I leave everything in 1 file? Should I take out the jQuery libraries and load only relevant JS on the current page?

    Read the article

  • Is there ever a reason to do all an object's work in a constructor?

    - by Kane
    Let me preface this by saying this is not my code nor my coworkers' code. Years ago when our company was smaller, we had some projects we needed done that we did not have the capacity for, so they were outsourced. Now, I have nothing against outsourcing or contractors in general, but the codebase they produced is a mass of WTFs. That being said, it does (mostly) work, so I suppose it's in the top 10% of outsourced projects I've seen. As our company has grown, we've tried to take more of our development in house. This particular project landed in my lap, so I've been going over it, cleaning it up, adding tests, etc. There's one pattern I see repeated a lot, and it seems so mindblowingly awful that I wondered if maybe there is a reason and I just don't see it. The pattern is an object with no public methods or members, just a public constructor that does all the work of the object. For example (the code is in Java, if that matters, but I hope this to be a more general question):

        public class Foo {
            private int bar;
            private String baz;

            public Foo(File f) {
                execute(f);
            }

            private void execute(File f) {
                // FTP the file to some hardcoded location,
                // or parse the file and commit to the database, or whatever
            }
        }

    If you're wondering, this type of code is often called in the following manner:

        for (File f : someListOfFiles) {
            new Foo(f);
        }

    Now, I was taught long ago that instantiating objects in a loop is generally a bad idea, and that constructors should do a minimum of work. Looking at this code, it looks like it would be better to drop the constructor and make execute a public static method. I did ask the contractor why it was done this way, and the response I got was "We can change it if you want", which was not really helpful. Anyway, is there ever a reason to do something like this, in any programming language, or is this just another submission to the Daily WTF?
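
    For what it's worth, here is a sketch of the refactor suggested above, replacing the do-everything constructor with a static method. The names are illustrative only, not from the actual codebase:

        import java.io.File;
        import java.util.List;

        public final class FileProcessor {

            // No state is retained, so there is nothing to construct.
            private FileProcessor() {}

            // The work formerly hidden in Foo's constructor becomes an
            // explicit, testable entry point with a descriptive name.
            public static void process(File f) {
                // FTP the file, parse and commit it, or whatever execute() did.
            }

            public static void processAll(List<File> files) {
                for (File f : files) {
                    process(f); // no throwaway objects allocated per iteration
                }
            }
        }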

    Read the article

  • Unit test: How best to provide an XML input?

    - by TheSilverBullet
    I need to write a unit test which validates the serialization of two attributes of an XML file (size ~30 KB). What is the best way to provide an input for this test? Here are the options I have considered:

        1. Add the file to the project and use a file reader
        2. Pass the contents of the XML as a string
        3. Create the XML through a program and pass it

    Which is my best option and why? If there is another way which you think is better, I would love to hear it.
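
    As one illustration of the first option, here is a minimal JUnit sketch that loads the XML from the test classpath; the resource name and the assertion are placeholders standing in for the real serializer and attributes:

        import static org.junit.Assert.assertEquals;

        import java.io.InputStream;
        import java.nio.charset.StandardCharsets;
        import org.junit.Test;

        public class SerializationTest {

            @Test
            public void attributesSurviveSerialization() throws Exception {
                // Ship the ~30 KB sample next to the test classes instead of
                // hardcoding a filesystem path.
                try (InputStream in = getClass().getResourceAsStream("/sample-input.xml")) {
                    String xml = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                    // Placeholder: count occurrences of the attribute under test;
                    // replace with a call to the real serializer and assertions
                    // on the two attributes being validated.
                    assertEquals(2, xml.split("someAttribute=").length - 1);
                }
            }
        }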

    Read the article

  • How do I query the gvfs metadata for a specific attribute?

    - by Mathieu Comandon
    A nice feature in Evince is that when you close the program and later reopen the same pdf, it automatically jumps to the page you were reading. The problem I have is that I often read ebooks on several computers, and I have to find where I was on the last computer I was reading the pdf on. I think syncing these bookmarks in Ubuntu One would be a killer feature for people like me who read pdfs on different computers. By investigating a bit, I found where Evince stores this data: it's in the gvfs metadata, and it can be accessed for a particular document by typing

        gvfs-ls -a "metadata::evince::page" myEbook.pdf

    Rather than querying a particular file, I'd like to query the whole metadata file (located in ~/.local/share/gvfs-metadata/home for the home directory) for any file where this particular attribute is set to some value. The biggest issue is that gvfs metadata is stored in binary files, and we all know it's not easy to get something out of a binary file. So, do you know any way to query the gvfs metadata for some attribute?

    Read the article

  • Creating a Template Like System in cPanel

    - by clifgray
    I am creating a medium-sized website using cPanel and its File Manager, and the majority of my pages are going to be the same except for the title and content section. I wanted to see if there is a system for making one general template file and having all the other pages inherit from it, so that all I have to supply is a content and title section, and the rest of the links, headers, and whatnot can be changed throughout the site by editing one file. Is there anything like this? I have used Jinja2 in Python and a few other systems for other server scripting languages, but I am not sure how to implement it with cPanel.

    Read the article

  • When attempting to install Ubuntu 12.04 from CD, I am stuck on a black screen with "Loading bootlogo..."

    - by Jessica K
    I downloaded Ubuntu 12.04 to my desktop and burned it to a CD using Infra Recorder and the instructions on the Ubuntu website. I restarted the PC to boot from the CD and got a black screen with "Loading bootlogo..."; then nothing happens and I have to restart into Windows. The CD seems to be correct. Folders include .disk, boot, casper, dists, install, isolinux, pics, pool, preseed, plus an autorun file, an md5sum text file, a readme.diskdefines file, and the wubi app.

    System information:

        Operating System: Windows Vista™ Home Premium (6.0, Build 6002) Service Pack 2 (6002.vistasp2_gdr.120824-0336)
        System Manufacturer: TOSHIBA
        System Model: Satellite L305
        BIOS: Default System BIOS
        Processor: Intel(R) Pentium(R) Dual CPU T2390 @ 1.86GHz (2 CPUs), ~1.9GHz
        Memory: 3062MB RAM
        Page File: 1553MB used, 4772MB available
        Windows Dir: C:\Windows
        DirectX Version: DirectX 11
        DX Setup Parameters: Not found
        DxDiag Version: 7.00.6002.18107 32bit Unicode
        Drive: D:
        Model: PIONEER DVD-RW DVRKD08L ATA Device
        Driver: c:\windows\system32\drivers\cdrom.sys, 6.00.6002.18005 (English), 4/11/2009 00:39:17, 67072 bytes

    Read the article

  • Files not refreshing after uploaded and overwritten via ftp

    - by guisasso
    I have been having some trouble updating my website this morning. After editing some files, in particular a css file, and uploading it to the server, the changes wouldn't appear, as if the file had not been overwritten. The file size on my local machine would be 3.244 kb but 3.080 kb on the server. Even after deleting the whole folder itself and uploading everything again, same error. Answer: Cloudflare.

    Read the article

  • Apache doesn't load .php files

    - by Haddex
    First, sorry for my English and for asking something that's answered all over the web. I've read a lot of posts about this problem but I still can't find the solution. I'm a web developer who recently moved to Ubuntu from Windows 7. I had a website done (it's online and working) and I set up LAMP to keep working with it. I made a test.php file with:

        <?php phpinfo(); ?>

    and put it in the /var/www/html directory; it shows all the information about PHP and I was really happy: "Ok, it's all done, tomorrow I will work hard." But I placed my whole site into /var/www/html, not in a folder (the index.php is in /var/www/html), and guess what: it doesn't load any of my .php files; the browser just keeps thinking. What I did:

        - I rebooted Apache: /etc/init.d/apache2 restart
        - I tried again with the test.php file and it works fine
        - I put a .html file in /var/www/html and it works fine
        - I looked at /etc/apache2/sites-enabled/000-default.conf and it says: DocumentRoot /var/www/html
        - I looked at /etc/apache2/mods-enabled/dir.conf and it says: DirectoryIndex index.html index.cgi index.pl index.php ...

    Edit: I think it's something related to phpMyAdmin, like I'm not able to connect with the database. But I get nothing on the screen when trying to load the page, so... I'm not sure. I can access the url localhost/phpmyadmin, and I edited the connection.php file like this:

        <?php
        # FileName="Connection_php_mysql.htm"
        # Type="MYSQL"
        # HTTP="true"
        $hostname_rakstadconnection = "localhost";
        $database_rakstadconnection = "rakstadclandb";
        $username_rakstadconnection = "root";
        $password_rakstadconnection = "admin";
        $rakstadconnection = mysql_connect($hostname_rakstadconnection, $username_rakstadconnection, $password_rakstadconnection) or trigger_error(mysql_error(), E_USER_ERROR);

        mysql_query("SET NAMES 'utf8'");
        ?>

    The name of the database is correct, likewise the user and password.

    Edit 2: could this be because it's a website that I brought to Linux from Windows? I used Dreamweaver.

    Edit 3: I changed the # to /*/, nothing. The error.log file says:

        [Mon Jun 09 17:08:13.627881 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Warning: require_once(/var/www/html/Connections/rakstadconnection.php): failed to open stream: Permission denied in /var/www/html/index.php on line 1
        [Mon Jun 09 17:08:13.627933 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Fatal error: require_once(): Failed opening required 'Connections/rakstadconnection.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/index.php on line 1

    I'm reading the error log, but... should I add a Linux path into my index.php file? I don't think so. Thanks.

    Read the article

  • How to create a shared folder using command line on a server

    - by sadmicrowave
    After following the tutorial here I ran into a problem. Here is what I did. On my server I installed nfs-kernel-server and edited the /etc/exports file to include the folder I want to share:

        /var *(rw,sync)

    On my client machine I edited my fstab file to include the share:

        //128.251.xxx.xxx/var/ ~/uslonsweb003 nfs #username=[username],password=[password], 0 0

    Then I entered the command:

        sudo mount -a

    which gives this error:

        mount.nfs: remote share not in 'host:dir' format

    Where did I go wrong with this setup? Also, if there is a better way (using the command line) to set up a folder share on an Ubuntu 10.10 server that will be accessed by other Linux and Windows machines, please let me know.

    UPDATE: The mapped drive is now not letting me create, edit, or delete files or folders (read-only access). My configuration is as follows:

    client fstab file:

        128.251.xxx.xxx:/var /home/coreyf/uslonsweb003 nfs rw,hard,intr, 0 0

    server exports file:

        /var *(rw,no_root_squash,sync,no_subtree_check)

    UPDATE 2: Using Allan's solution the drive mounted correctly; however, after putting rw,intr as my additional parameters I still cannot create, edit, or delete folders/files.

    Read the article

  • MIT and copyright

    - by Petah
    I am contributing to a library that is licensed under the MIT license. The license, and a comment at the top of each class file, says:

        Copyright (c) 2011 Joe Bloggs <[email protected]>

    I assume that he owns the copyright to the file and can change the license of that file as he sees fit. If I contribute to the library with a new class entirely written by me, can I claim copyright of that file and put:

        Copyright (c) 2011 Petah Piper <[email protected]>

    at the top?

    Read the article

  • unable to mount hard drives

    - by mixtyplix
    I am currently trying to back up data from my computer's hard drive to an external one while live-booting Ubuntu from a USB, and I've encountered a bit of a problem: I cannot mount my hard drives. Whenever I try, I always get one of two errors:

        fuse: failed to access mountpoint [file I'm trying to mount to]: No such file or directory

    or

        Error mounting: mount exited with exit code 21: mount: according to mtab, [drive] is already mounted on [file I'm trying to mount to]

    How can I fix this?

    Read the article

  • How is intermediate data organized in MapReduce?

    - by Pedro Cattori
    From what I understand, each mapper outputs an intermediate file. The intermediate data (the data contained in each intermediate file) is then sorted by key. Then a reducer is assigned a key by the master. The reducer reads from the intermediate file containing the key and then calls reduce using the data it has read. But in detail, how is the intermediate data organized? Can data corresponding to one key be held in multiple intermediate files? What happens when there is too much data corresponding to one key to be held by a single file? In short, how do intermediate partitions differ from intermediate files, and how are these differences dealt with in the implementation?
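
    In Hadoop-style implementations, the mapping from keys to partitions is a hash of the key modulo the number of reducers, so one partition holds many keys, but each key belongs to exactly one partition even when its records are spread across many mappers' intermediate files. A sketch in the spirit of Hadoop's default HashPartitioner (not the actual source):

        // Each mapper writes one intermediate partition per reducer. The
        // partition index is derived from the record's key, so every record
        // with the same key is read by the same reducer, even though the
        // key's data may live in many mappers' intermediate files.
        public class HashPartitioner<K, V> {
            public int getPartition(K key, V value, int numReduceTasks) {
                // Mask off the sign bit so the index is never negative.
                return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
            }
        }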

    Read the article

  • What would cause different rates of packet loss between client and server in UDP?

    - by febreezey
    If I've implemented a reliable UDP file transfer protocol, and my test setup deliberately drops a percentage of packets during transmission, why does transmission time increase much more sharply as the packet loss percentage increases going from the client to the server than from the server to the client? Is this something that can be explained as a result of the protocol? Here are my numbers from two separate experiments. I kept the max packet size at 500 bytes, the file at 1 megabyte, and the loss in the opposite direction at 5%.

    Server-to-client loss percentage varied (1 MB file, 500 B segments, client-to-server loss 5%):

        1%:  17253 ms
        3%:   3388 ms
        5%:   7252 ms
        10%:  6229 ms
        11%: 12346 ms
        13%: 11282 ms
        15%:  9252 ms
        20%: 11266 ms

    Client-to-server loss percentage varied (1 MB file, 500 B segments, server-to-client loss 5%):

        1%:    4227 ms
        3%:    4334 ms
        5%:    3308 ms
        10%:  31350 ms
        11%:  36398 ms
        13%:  48436 ms
        15%:  65475 ms
        20%: 120515 ms

    You can clearly see an exponential increase in the client-to-server group.
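
    One plausible reading, assuming a stop-and-wait or small-window sender with cumulative ACKs: a dropped data packet (client-to-server here, since the client is transmitting) always costs a full retransmission timeout, while a dropped ACK (server-to-client) can be masked by a later cumulative ACK. A toy sketch of such a sender loop, with hypothetical framing that is not the asker's protocol:

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.SocketTimeoutException;

        public class StopAndWaitSender {
            // Sends one segment and blocks until its ACK arrives. Every lost
            // data packet stalls the transfer for a full timeout before the
            // resend, which is why loss in the data direction dominates the
            // total transfer time.
            static void sendSegment(DatagramSocket socket, DatagramPacket data,
                                    DatagramPacket ackBuffer) throws Exception {
                socket.setSoTimeout(500); // retransmission timeout in ms
                while (true) {
                    socket.send(data);
                    try {
                        socket.receive(ackBuffer); // wait for the ACK
                        return;                    // ACK arrived, move on
                    } catch (SocketTimeoutException lost) {
                        // Data or ACK was dropped: loop and retransmit.
                        // With cumulative ACKs a lost ACK is often repaired
                        // by the next one, so data-direction loss hurts more.
                    }
                }
            }
        }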

    Read the article

  • DataSets and XML - The Simplistic Approach

    One of the first ways I learned how to read XML data from external data sources was by using a DataSet's ReadXml function. This function takes a file path for an XML document and then converts it to a DataSet. This functionality is great when you need a simple way to process an XML document. In addition, the DataSet object also offers a simple way to save data in an XML format by using the WriteXml function. This function saves the current data in the DataSet to an XML file to be used later.

        DataSet ds = new DataSet();
        string filePath = "http://www.yourdomain.com/someData.xml";
        string fileSavePath = @"C:\Temp\Test.xml";

        // Read file from this location
        ds.ReadXml(filePath);

        // Save file to this location
        ds.WriteXml(fileSavePath);

    I have used the ReadXml function before when consuming data from external RSS feeds to display on one of my sites. It allows me to quickly pull in data from external sites with little to no processing. Example site: MyCreditTech.com

    Read the article

  • All my Ubuntu VMs have apt-get update problems

    - by kashani
    I'm running VirtualBox 4.1 on an x86_64 Windows 7 host. I've got a collection of 12.04 and 10.04 LTS VMs I use to create debs for work. In the last week I started noticing problems on the 12.04 VMs. I tried the usual apt-get clean bit, which didn't help. I rolled a new 11.10 VM for testing a WordPress upgrade. This VM has never been able to run apt-get update without errors. The interesting errors look like this:

        Get: 8 http://security.ubuntu.com oneiric-security/main Translation-en_US [344 B]
        14% [7 Sources 48686/877 kB 6%] [Waiting for headers]bzip2: (stdin) is not a bzip2 file.
        Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en
        Hit http://security.ubuntu.com oneiric-security/restricted Translation-en
        Hit http://security.ubuntu.com oneiric-security/universe Translation-en
        22% [7 Sources 127526/877 kB 15%] [Waiting for headers]/usr/bin/xz: (stdin): File format not recognized

    and ends with:

        /usr/bin/xz: (stdin): File format not recognized
        Ign http://us.archive.ubuntu.com oneiric/main Translation-en_US
        Ign http://us.archive.ubuntu.com oneiric-updates/main Translation-en_US
        Fetched 18.5 MB in 47s (392 kB/s)
        W: GPG error: http://us.archive.ubuntu.com oneiric InRelease: File /var/lib/apt/lists/partial/us.archive.ubuntu.com_ubuntu_dists_oneiric_InRelease doesn't start with a clearsigned message
        W: GPG error: http://security.ubuntu.com oneiric-security InRelease: File /var/lib/apt/lists/partial/security.ubuntu.com_ubuntu_dists_oneiric-security_InRelease doesn't start with a clearsigned message

    xz-utils, lzma, etc. are all installed. I've reinstalled the VM from scratch three times and end up at the same point.

    Read the article

  • Setting up dual monitors, Xorg.conf issues

    - by JTS
    I just got a new computer (W520, nVidia GF106 [Quadro 2000] graphics card) and installed Ubuntu on it using Wubi. I have everything working, so I wanted to set it up to use two monitors with an extended screen. I figured I had to edit xorg.conf, but the file didn't exist. So I tried to create it by booting in recovery mode and executing

        Xorg -configure

    but I am getting these errors:

        (EE) Failed to load module "vmwgfx" (module does not exist, 0)
        (EE) vmware: Please ignore the above warnings about not being able to load module/driver vmwgfx
        (++) Using config file: "/root/xorg.conf.new"
        (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        (EE) [drm] No DRICreatedPCIBusID symbol
        Number of created screens does not match number of detected devices.
        Configuration failed.
        ddxSigGiveUp: Closing log

    Any idea how I can get Xorg -configure to work, so that I have an xorg.conf file I can edit to enable TwinView?

    EDIT: Another way I could ask the same question to solve this problem is: why can't I boot with an xorg.conf file generated by nvidia-xconfig? Is there something in the generated xorg.conf file that might need editing?

    Read the article

  • How do I dynamically reload content files?

    - by Kikaimaru
    Is there a relatively simple way to dynamically reload content files, such as effect files? I know I can do the following (see the sketch after this list for the detection step):

        1. Detect a change to the file
        2. Run the content pipeline to rebuild that specific file
        3. Unload ALL content that was loaded
        4. Load all content

    and use double references to reference content files. The problem is with step 3 (and step 2 isn't that nice either). I need to unload everything because if I have a model Hero.x which references the Model.fx effect, and I change the Model.fx file, I need to reload the Hero.x file, which will then call LoadExternalReference on Model.fx. Has someone managed to make this work without rewriting the whole ContentManager (and every ContentReader) and tracking calls to LoadExternalReference?
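
    For the detection step alone, a minimal sketch using a file-system watcher is below. Java's WatchService stands in here for whatever watcher the XNA project would actually use, and the Content path is a placeholder:

        import java.nio.file.*;

        public class ContentWatcher {
            public static void main(String[] args) throws Exception {
                Path contentDir = Paths.get("Content"); // hypothetical content folder
                WatchService watcher = FileSystems.getDefault().newWatchService();
                contentDir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

                while (true) {
                    WatchKey key = watcher.take(); // blocks until something changes
                    for (WatchEvent<?> event : key.pollEvents()) {
                        // Step 1 done: a file changed. Steps 2-4 (rebuild,
                        // unload, reload) would be triggered from here.
                        System.out.println("Changed: " + event.context());
                    }
                    key.reset(); // re-arm the key for further events
                }
            }
        }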

    Read the article

  • Error while loading shared libraries - libwebsock

    - by kittyPL
    I'm trying to set up libwebsock, a simple C websocket library. I followed the installation procedure from the INSTALL file and everything went fine. I'm able to compile the test program given in the examples. But when I want to run my executable, a wild error appears:

        ./echo: error while loading shared libraries: libwebsock.so.1: cannot open shared object file: No such file or directory

    I checked /usr/local/lib twice; libwebsock.so.1 exists and is doing very well. I also tried copying the lib to the echo folder (so it's placed next to the binary), still the same error. It's quite funny for me:

        shadowz@Ubu:~/WebSocket$ ls
        echo  echo.c  echo.cpp  libwebsock.so.1
        shadowz@Ubu:~/WebSocket$ ./echo
        ./echo: error while loading shared libraries: libwebsock.so.1: cannot open shared object file: No such file or directory

    Any suggestions? I'm running out of ideas...

    Read the article

  • PLEASE HELP RECOVER MY MINT14 BOOT/GRUB [closed]

    - by C2940680
    Hi, I have the following from bootinfoscript v0.61 [1-Apr-2012]. I tried several times to do a boot-repair from YannUbuntu; however, I get an error rebooting into my Linux Mint 14 Cinnamon. I have separate /boot, /, and /home partitions. Could I still use the /home partition if I recover its files onto an external USB drive, then reformat the whole hard drive, repartition, and restore /home from the USB drive? Also, I tried to install Qubes 2 beta and then deleted the partition where it was stored. Also (my bad) I tried to copy BOOT.CFG from sda6 to sda1 and sda2. All answers appreciated in advance.

        sda1: __________________________________________
        File system:       ext2
        Boot sector type:  -
        Boot sector info:
        Operating System:
        Boot files:        /grub/grub.cfg

        sda2: __________________________________________
        File system:       Extended Partition
        Boot sector type:  -
        Boot sector info:

        sda5: __________________________________________
        File system:       swap
        Boot sector type:  -
        Boot sector info:

        sda6: __________________________________________
        File system:       ext4
        Boot sector type:  -
        Boot sector info:
        Operating System:  Linux Mint 14 Nadia
        Boot files:        /boot/grub/grub.cfg

    Read the article

  • From a DDD perspective is a report generating service a domain service or an infrastructure service?

    - by Songo
    Let's assume we have the following service whose responsibility is to generate Excel reports:

        class ExcelReportService {
            public String generateReport(String fileFormatFilePath, ResultSet data) {
                ReportFormat reportFormat = new ReportFormat(fileFormatFilePath);

                ExcelDataFormatterService excelDataFormatterService = new ExcelDataFormatterService();
                FormattedData formattedData = excelDataFormatterService.format(data);

                ExcelFileService excelFileService = new ExcelFileService();
                String reportPath = excelFileService.generateReport(reportFormat, formattedData);

                return reportPath;
            }
        }

    This is pseudo code for the service I want to design, where:

        fileFormatFilePath: path to a configuration file where I'll keep the format of my Excel file (headers, column widths, number of columns, etc.)
        data: the actual records returned from the database. This data can't be used directly because I might need to apply further calculations to it before inserting it into the Excel file.
        ReportFormat: value object to hold the report format; has methods like getHeaders(), getColumnWidth(), etc.
        ExcelDataFormatterService: a service to hold any logic that needs to be applied to the data returned from the database before inserting it into the file.
        FormattedData: value object that represents the formatted data to be inserted.
        ExcelFileService: a wrapper on top of the 3rd-party library that generates the Excel file.

    Now how do you determine whether a service is an infrastructure or a domain service? I have the following 3 services here: ExcelReportService, ExcelDataFormatterService, and ExcelFileService.

    Read the article
