Search Results

Search found 7697 results on 308 pages for 'eb dev'.

  • Do I have a bad SD card?

    - by User1
    I'm trying to copy data from my computer to an SD card. After a few hundred megabytes, I keep getting the following errors in dmesg:

      [34542.836192] end_request: I/O error, dev mmcblk0, sector 855936
      [34542.836284] FAT: unable to read inode block for updating (i_pos 13694981)
      [34542.836306] MMC: killing requests for dead queue
      [34542.836310] end_request: I/O error, dev mmcblk0, sector 9280
      [34542.837035] FAT: unable to read inode block for updating (i_pos 148486)
      [34542.837062] MMC: killing requests for dead queue
      [34542.837066] end_request: I/O error, dev mmcblk0, sector 1
      [34542.837074] FAT: bread failed in fat_clusters_flush
      [34542.837085] MMC: killing requests for dead queue

    These were all files I copied from a smaller SD card; I just want to transfer them to my new, larger card for my phone. I tried the same experiment with different files on a different machine and the card failed again. Reading data from the old card went fine. My computers are older, while the SD card is brand new (16 GB, Class 4). Could my computers simply be too old? Is there a definitive test to verify whether my SD card is bad?
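
    A destructive write-mode badblocks run is the closest thing to a definitive test. A minimal sketch, assuming the card shows up as /dev/mmcblk0, is unmounted, and holds nothing you still need:

      # WARNING: -w overwrites every sector on the card
      lsblk                        # double-check which device is the SD card
      sudo umount /dev/mmcblk0p*   # unmount any auto-mounted partitions
      sudo badblocks -wsv /dev/mmcblk0

    If this reports any bad blocks on a brand-new card, the card (or possibly the reader) is faulty.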

  • Debian doesn't boot after removing secondary hard drive

    - by Daveel
    In the beginning I had Debian 6 running on one hard drive (/dev/sda1). Then I decided to keep all my stuff (pics, videos, etc.) on a second, slave hard drive (/dev/sdb1). So sda1 holds the Debian OS, and sdb1 doesn't contain any OS files. I made it mount automatically by adding a row to /etc/fstab (UUID and the directory to mount to). Time passed, and when I tried to swap that secondary hard drive for one with a bigger capacity, for some reason Debian won't boot (just from sda1 by itself) after removing the secondary hard drive (sdb1). But if I plug sdb1 back in, it boots just fine. I tried commenting the line out of /etc/fstab so it doesn't mount, and I also ran update-grub after umount /dev/sdb1. What's the right way to remove a secondary hard drive?
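
    One common cause is a hard fstab entry: with the default options, the boot stops when a listed filesystem is missing. A hedged sketch of a more forgiving entry (the UUID, mount point and filesystem type here are placeholders):

      # /etc/fstab - nofail lets the boot continue even if this disk is absent
      UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/data  ext4  defaults,nofail  0  2

    After editing fstab, running update-grub once more with the drives in their final configuration is a sensible final step.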

  • Doing "text mode 'splash' game" during boot.

    - by Vi
    Sometimes I want to do something (for example, play a simple text-mode game) while the system is booting up. This is especially useful when lengthy reiserfs transaction replays are happening. My current hacky way of doing it is:

      1. Put the program on the initramfs.
      2. Before running /sbin/init, "openvt 2 /my/program".
      3. Turn off messages from the kernel (sysrq 0).
      4. Override /dev/console with /dev/null (to prevent boot messages).

    The problems are:

      - There are STILL some messages interfering with the program's output.
      - I can't see the boot messages by switching back to that virtual terminal.
      - After the boot sequence finishes, /dev/tty2 ends up attached both to getty and to my program.

    How can I do this properly, without running graphical splashes? The system is Linux Debian Squeeze, with no dependency-based sysv scripts.
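
    For the kernel-message half of the problem, a small sketch of the pieces that usually go in the initramfs init script (the program path and VT number are just examples):

      # silence kernel messages on the console (same effect as sysrq 0)
      echo 0 > /proc/sys/kernel/printk
      # run the game on VT2 and switch to it; -c picks the console, -s switches to it
      openvt -s -c 2 -- /my/program

    Removing or commenting the tty2 getty line in /etc/inittab avoids the double attachment once the real init takes over.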

  • route command not being executed in rc.local

    - by user1265478
    I tried adding

      route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0

    to my Fedora rc.local file, but it's not being executed when Fedora boots up. What can I do to fix this?

    Update: I changed it to the full path in my rc.local:

      /sbin/route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0

    but it's still not being executed. I changed it to

      sudo /sbin/route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0

    and it still doesn't work, although it works when I enter it manually in the terminal.
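
    Two things worth ruling out, as a sketch (the log path is arbitrary): on recent Fedora releases /etc/rc.d/rc.local must be executable and start with a shebang, and rc.local can run before the network interface is up.

      # one-off checks
      ls -l /etc/rc.d/rc.local        # should show -rwx and a #!/bin/sh first line
      sudo chmod +x /etc/rc.d/rc.local

      # inside rc.local: capture errors so a failure is visible after boot
      /sbin/route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0 >> /var/log/rc.local.log 2>&1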

  • Migrating a running production server to Xen, unmodified as a second HDD?

    - by DaveCol
    I have a production server which I am looking to virtualize via Xen. For this purpose I have purchased a new SATA HDD, on which I have promptly installed CentOS 5.5 x64 with Xen server. Now I have two drives: /dev/sda1, running as the host with Xen server installed; and /dev/sda2, which is the HDD the original server is installed on. Is it possible to use /dev/sda2 as a guest OS in a Xen server? Would I have to modify its kernel? Thank you for any input.
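
    Running the guest unmodified means full (hardware-assisted) virtualization rather than paravirtualization, so no kernel changes are needed as long as the CPU supports VT-x/AMD-V. A minimal domU config sketch for the xm toolstack on CentOS 5 (the name, memory size and bridge are assumptions):

      # /etc/xen/oldserver - HVM guest booting straight off the physical partition
      name    = "oldserver"
      builder = "hvm"
      kernel  = "/usr/lib/xen/boot/hvmloader"
      memory  = 2048
      disk    = [ 'phy:/dev/sda2,hda,w' ]
      vif     = [ 'bridge=xenbr0' ]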

  • Bootstrapped Ubuntu 12.04 EC2 instance. Where to find log?

    - by nocode
    So I bootstrapped a shell script to install and run a bunch of tasks. It looks like it ran for the most part, but one part I added did not take effect: formatting an extra EBS volume. Pretty straightforward:

      mkfs.ext4 /dev/xvdf
      mkdir -m 000 /vol01
      echo "/dev/xvdf /vol01 auto noatime 0 0" | sudo tee -a /etc/fstab
      sudo mount /vol01

    I was able to install MongoDB, NGINX and Forever. I selected /dev/xvdf in the AWS console and can see it. The third line's entry is not in fstab either. I've searched through various logs in /var/log/ but I don't really see much indicating the execution of the bootstrap. Logs that I see and looked through: auth.log, boot.log, dmesg, dpkg.log, syslog, udev.
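
    On Ubuntu cloud images the user-data script is executed by cloud-init, so that is where the trace lands. A hedged sketch of where to look:

      # cloud-init's own log of what it ran and when
      less /var/log/cloud-init.log
      # on newer images the script's stdout/stderr is captured separately in
      # /var/log/cloud-init-output.log; failing that, look under /var/lib/cloud/instance/

    Worth noting: if the script really contained the curly quotes and en-dashes shown above (a common side effect of drafting in a word processor), the mkdir and echo lines would fail on their own.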

  • How to make RHEL have persistent local hdd name?

    - by Mxx
    I have two identical Dell R720 servers running identical Oracle Enterprise Linux (RHEL) 6.4. Both servers are (supposedly) configured in exactly the same way. However, one of the servers is behaving differently: every other reboot, its local HDD name (and the related partitions) flips from /dev/sda to /dev/sdj. This is problematic because both servers are configured for multipathd, and when this flip happens their configs no longer match and Oracle DB (or its clusterware) complains that the nodes are not configured identically. Why does one server have consistent device names while the other keeps flipping back and forth? How can I make the local HDD consistently be /dev/sda? I suspect this might have something to do with udev, but I'm not sure.
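
    The sdX letters are assigned in probe order and are not guaranteed to be stable, so the usual fix is to address the disk by a persistent attribute rather than forcing /dev/sda. A udev rule sketch (the serial string is a placeholder; read the real one first):

      # find the disk's stable identifiers
      udevadm info --query=all --name=/dev/sda | grep ID_SERIAL

      # /etc/udev/rules.d/59-local-disk.rules - stable alias for the local disk
      KERNEL=="sd*", ENV{ID_SERIAL}=="PLACEHOLDER_SERIAL", SYMLINK+="localdisk%n"

    Pointing multipath.conf and the DB config at /dev/localdisk* (or directly at /dev/disk/by-id/...) then survives the reordering.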

  • How to recover data from my disk - I accidentally copied an ISO onto it with dd

    - by sijoune
    I wanted to create a bootable USB from an ISO image, and I accidentally gave dd one of my hard disks as the output instead of my USB drive. The ISO was 3.3 GB and my disk is 1 TB, and it was almost full. Can I at least restore the data that has not been overwritten? Right now I can't even mount it. I get this error:

      Error mounting /dev/sdd1 at /media/main/UDF Volume: Command-line
      `mount -t "udf" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,iocharset=utf8,umask=0077" "/dev/sdd1" "/media/main/UDF Volume"'
      exited with non-zero exit status 32: mount: wrong fs type, bad option, bad superblock on /dev/sdd1,
      missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so

    Also, since I know which filesystem my disk used: if I reformat it to that filesystem, is there any chance I can mount it and retrieve the rest of the files?
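
    Reformatting risks destroying more; the safer route is to image the disk read-only first and carve files out of the image. A sketch, assuming a second disk with enough free space mounted at /mnt/rescue:

      # copy everything readable, logging progress so the run can be resumed
      sudo ddrescue /dev/sdd /mnt/rescue/disk.img /mnt/rescue/disk.map
      # carve recoverable files out of the image regardless of filesystem damage
      sudo photorec /mnt/rescue/disk.img

    testdisk (photorec's sibling tool) can also try to rebuild the old partition table and superblocks in place, which preserves file names where carving does not.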

  • Can You Specify Where LVM Snapshots Are (Initially) Stored?

    - by bottles
    Disclaimer: this is my first time using LVM. Upon RTFM, it appears that LVM snapshots are automatically stored in the same directory as the original logical volume; in my case, that would mean the /dev directory. This isn't very nice, because there's not enough disk space there for me to store a large snapshot. So when I run a command like

      lvcreate --size 1G --snapshot --name snapshot /dev/lvmData/usr

    do I need an additional 1 GB of space free in /dev? Is there any way to specify a different directory in which to store my snapshot?
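
    A clarification worth checking against the docs: /dev only holds device nodes, not data. Snapshot blocks are allocated from free extents in the same volume group as the origin volume, so the space that matters is the VG's free space, not anything under /dev. A quick sketch:

      sudo vgs lvmData     # VFree must be >= the snapshot size requested
      sudo lvcreate --size 1G --snapshot --name snap /dev/lvmData/usr

    If the volume group has no free extents, extend it with another physical volume (vgextend) before snapshotting.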

  • Effectively managing crontab

    - by jakenoble
    My crontab looks something like this:

      1 * * * * /var/www/cron/site1.sh > /dev/null 2>&1
      0 * * * * /var/www/cron/site2.sh > /dev/null 2>&1
      3 * * * * /var/www/cron/site3.sh > /dev/null 2>&1

    This works great and lets me place all the nasty little script calls in one place, rather than making crontab harder to read than it already is. But it fails massively when site2.sh needs one script to run once a day, another to run once a week, and another to run every 5 minutes. And of course it gets worse as new scripts are added with different timings. Is there a better way?

    EDIT: By better I mean more manageable. A large crontab is not manageable, but neither is having scripts all over the place. Not necessarily a GUI.
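
    One conventional layout is a drop-in file per site under /etc/cron.d, which keeps each site's schedule self-contained. A sketch (the file name, user and script names are assumptions):

      # /etc/cron.d/site2 - note the extra user field in this format
      */5 * * * *  www-data  /var/www/cron/site2-poll.sh   > /dev/null 2>&1
      0 3 * * *    www-data  /var/www/cron/site2-daily.sh  > /dev/null 2>&1
      0 4 * * 0    www-data  /var/www/cron/site2-weekly.sh > /dev/null 2>&1

    Scripts that only need to run "roughly daily/weekly" can instead be dropped into /etc/cron.daily and /etc/cron.weekly, where run-parts picks them up automatically.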

  • SVN - managing .htaccess file on various instances

    - by user178087
    I have a question and need a suggestion. We have to manage a .htaccess file on three different instances: dev, qa and prod. Currently we have Tortoise configured for SCM. The issue we are facing is that the .htaccess for dev and qa is different from the one for prod. At present, we have to manually merge differences from the dev .htaccess into the prod .htaccess. Is there any alternative way of managing this file without this manual process, since it is error-prone? Any suggestions in this regard will be highly appreciated.
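
    One approach is to stop versioning the live file and version per-environment fragments instead, assembling .htaccess at deploy time. A sketch (the fragment names are placeholders):

      svn add htaccess.common htaccess.dev htaccess.qa htaccess.prod
      # during deployment, on each instance (prod shown here):
      cat htaccess.common htaccess.prod > .htaccess

    Shared directives then live in one place, and only the genuinely environment-specific lines differ between the fragments, so there is nothing left to hand-merge.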

  • Mac Terminal - Color Co-ordinated

    - by Biscuit128
    I would like to create a couple of shortcuts on my iMac which ssh to my dev box and to my prod box. I would like my dev connection to use settings similar to the Homebrew profile (green text, black background) and my prod connection to use red text on a black background. How can this be configured? Would I need multiple bashrc files, one for prod and one for dev, sourced individually? If so, how can I get the profiles to be sourced as soon as I double-click the shortcuts? Thanks
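
    This can be done without touching bashrc at all, since the colours live in Terminal profiles. A sketch of one way (the profile and file names are assumptions): in Terminal > Preferences, duplicate the Homebrew profile as "Dev" and create a red-on-black "Prod"; in each profile's Shell tab set the run command to the appropriate ssh line; then export each profile as a settings file. Double-clicking the exported file opens a window with that profile's colours and command:

      open ~/Desktop/Dev.terminal
      open ~/Desktop/Prod.terminal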

  • Provider<HttpSession> not getting injected

    - by user1033715
    I am using gwt-dispatch to communicate and get data from the server to the client. In order to get user-specific data I want to store the user object in the HttpSession and access application-specific data from the servlet context, but when I inject Provider<HttpSession> into my handler, the provider is null when execute is called, i.e. it does not get injected. The following is the code from my action handler:

      @Inject
      Provider<HttpSession> provider;

      public ReadEntityHandler() {
      }

      @Override
      public EntityResult execute(ReadEntityAction arg0, ExecutionContext arg1) throws DispatchException {
          HttpSession session = null;
          if (provider == null)
              System.out.println("httpSession is null..");
          else {
              session = provider.get();
              System.out.println("httpSession not null..");
          }
          System.out.println("reached execution");
          return null;
      }

    and my dispatch servlet:

      @Singleton
      public class ActionDispatchServlet extends RemoteServiceServlet implements StandardDispatchService {
          private Dispatch dispatch;

          public ActionDispatchServlet() {
              InstanceActionHandlerRegistry registry = new DefaultActionHandlerRegistry();
              registry.addHandler(new ReadEntityHandler());
              dispatch = new SimpleDispatch(registry);
          }

          @Override
          public Result execute(Action<?> action) throws DispatchException {
              try {
                  return dispatch.execute(action);
              } catch (RuntimeException e) {
                  log("Exception while executing " + action.getClass().getName() + ": " + e.getMessage(), e);
                  throw e;
              }
          }
      }

    When I try to inject the ReadEntityHandler it throws the following exception:

      [WARN] failed guiceFilter
      com.google.inject.ProvisionException: Guice provision errors:

      1) Error injecting constructor, java.lang.NullPointerException
        at com.ensarm.wikirealty.server.service.ActionDispatchServlet.<init>(ActionDispatchServlet.java:22)
        at com.ensarm.wikirealty.server.service.ActionDispatchServlet.class(ActionDispatchServlet.java:22)
        while locating com.ensarm.wikirealty.server.service.ActionDispatchServlet

      1 error
        at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:834)
        at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:856)
        at com.google.inject.servlet.ServletDefinition.init(ServletDefinition.java:74)
        at com.google.inject.servlet.ManagedServletPipeline.init(ManagedServletPipeline.java:84)
        at com.google.inject.servlet.ManagedFilterPipeline.initPipeline(ManagedFilterPipeline.java:106)
        at com.google.inject.servlet.GuiceFilter.init(GuiceFilter.java:168)
        at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
        at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:593)
        at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
        at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1220)
        at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:513)
        at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448)
        at com.google.gwt.dev.shell.jetty.JettyLauncher$WebAppContextWithReload.doStart(JettyLauncher.java:468)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
        at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
        at org.mortbay.jetty.handler.RequestLogHandler.doStart(RequestLogHandler.java:115)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
        at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
        at org.mortbay.jetty.Server.doStart(Server.java:222)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
        at com.google.gwt.dev.shell.jetty.JettyLauncher.start(JettyLauncher.java:672)
        at com.google.gwt.dev.DevMode.doStartUpServer(DevMode.java:509)
        at com.google.gwt.dev.DevModeBase.startUp(DevModeBase.java:1068)
        at com.google.gwt.dev.DevModeBase.run(DevModeBase.java:811)
        at com.google.gwt.dev.DevMode.main(DevMode.java:311)
      Caused by: java.lang.NullPointerException
        at net.customware.gwt.dispatch.server.DefaultActionHandlerRegistry.addHandler(DefaultActionHandlerRegistry.java:21)
        at com.ensarm.wikirealty.server.service.ActionDispatchServlet.<init>(ActionDispatchServlet.java:24)
        at com.ensarm.wikirealty.server.service.ActionDispatchServlet$$FastClassByGuice$$e0a28a5d.newInstance(<generated>)
        at com.google.inject.internal.cglib.reflect.FastConstructor.newInstance(FastConstructor.java:40)
        at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:58)
        at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:84)
        at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:200)
        at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:43)
        at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:878)
        at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
        at com.google.inject.Scopes$1$1.get(Scopes.java:64)
        at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
        at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:825)
        at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:871)
        at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:821)
        ... 25 more

  • Push notification not received on iOS 5 device

    - by Surender Rathore
    I am sending a push notification to a device running iOS 5.0.1, but the notification is not received on the iOS 5 device. If I send a notification to an iOS 4.3.3 device, it is received there. My server is developed in C# and the Apple push notification certificate is used in .p12 format. My code to register for notifications is:

      - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
          // Override point for customization after application launch.
          // Set the navigation controller as the window's root view controller and display.
          deviceToken = nil;
          self.window.rootViewController = self.navigationController;
          [self.window makeKeyAndVisible];
          [[UIApplication sharedApplication] enabledRemoteNotificationTypes];
          [[UIApplication sharedApplication] registerForRemoteNotificationTypes:
              UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeSound | UIRemoteNotificationTypeAlert];
          return YES;
      }

      - (void)application:(UIApplication *)application didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)devToken {
          UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Succeeded in registering for APNS"
                                                              message:@"Sucess"
                                                             delegate:self
                                                    cancelButtonTitle:@"Ok"
                                                    otherButtonTitles:nil];
          [alertView show];
          [alertView release];
          deviceToken = [devToken retain];
          NSMutableString *dev = [[NSMutableString alloc] init];
          NSRange r;
          r.length = 1;
          unsigned char c;
          for (int i = 0; i < [deviceToken length]; i++) {
              r.location = i;
              [deviceToken getBytes:&c range:r];
              if (c < 10) {
                  [dev appendFormat:@"0%x", c];
              } else {
                  [dev appendFormat:@"%x", c];
              }
          }
          NSUserDefaults *def = [NSUserDefaults standardUserDefaults];
          [def setObject:dev forKey:@"DeviceToken"];
          [def synchronize];
          NSLog(@"Registered for APNS %@\n%@", deviceToken, dev);
          [dev release];
      }

      - (void)application:(UIApplication *)application didFailToRegisterForRemoteNotificationsWithError:(NSError *)error {
          NSLog(@"Failed to register %@", [error localizedDescription]);
          UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"FAILED"
                                                              message:@"Fail"
                                                             delegate:self
                                                    cancelButtonTitle:@"Ok"
                                                    otherButtonTitles:nil];
          [alertView show];
          [alertView release];
          deviceToken = nil;
      }

      - (void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo {
          NSLog(@"Recieved Remote Notification %@", userInfo);
          NSDictionary *aps = [userInfo objectForKey:@"aps"];
          NSDictionary *alert = [aps objectForKey:@"alert"];
          //NSString *actionLocKey = [alert objectForKey:@"action-loc-key"];
          NSString *body = [alert objectForKey:@"body"];
          // NSMutableArray *viewControllers = [NSMutableArray arrayWithArray:[self.navigationController viewControllers]];
          // if ([viewControllers count] > 2) {
          //     NSRange r;
          //     r.length = [viewControllers count] - 2;
          //     r.location = 2;
          //     [viewControllers removeObjectsInRange:r];
          //     MapViewController *map = (MapViewController*)[viewControllers objectAtIndex:1];
          //     [map askDriver];
          // }
          // [self.navigationController setViewControllers:viewControllers];
          // [viewControllers release];

          //NewBooking,"BookingId",Passenger_latitude,Passenger_longitude,Destination_latitude,Destination_longitude,Distance,Time,comments
          NSArray *arr = [body componentsSeparatedByString:@","];
          if ([arr count] > 0) {
              MapViewController *map = (MapViewController*)[[self.navigationController viewControllers] objectAtIndex:1];
              [map askDriver:arr];
              [self.navigationController popToViewController:[[self.navigationController viewControllers] objectAtIndex:1] animated:YES];
          }
          //NSString *sound = [userInfo objectForKey:@"sound"];
          //UIAlertView *alertV = [[UIAlertView alloc] initWithTitle:actionLocKey
          //                                                 message:body delegate:nil
          //                                       cancelButtonTitle:@"Reject"
          //                                       otherButtonTitles:@"Accept", nil];
          //[alertV show];
          //[alertV release];
      }

    Can anyone help me out with this issue? Why is the notification received on the iOS 4.3 device but not on iOS 5? Thank you very much in advance!

  • The volume "filesystem root" has only 0 bytes disk space remaining?

    - by radek
    I installed 11.10 about two weeks ago and have run into some strange trouble recently. The installation was on a brand-new laptop with a clear 128 GB SSD. I opted for encrypting the home directory; apart from that I accepted the defaults during installation. There is no other OS on my laptop. I had circa 40 GB in use when (for the third time) I got to see this very unpleasant window: [screenshot of the low disk space warning quoted in the title]

    Twice the situation was pretty bad and the whole system slowed down considerably. After a reboot I could not log in to the graphical interface (an error message reported insufficient space) and had to remove some files from the command line first. The third time I still managed to quickly delete some files, and that helped. My laptop is mainly a work environment: no torrents, no games, just two movies. The only media filling space are ~20 GB of pictures and a bunch of PDFs. I have been working mostly with PostgreSQL & PostGIS, GeoServer and QGIS recently. Although I have had lots of opportunities to test and practice my backups, I would be extremely grateful if somebody could point me to any potential solutions to this problem. My laptop was bought just before I installed Ubuntu, and it came without an OS. Could this be a hardware issue? Or is the encrypted home causing my headaches? Thanks for help!

    Update: as suggested by @maniat1k, here is the current output of fdisk -l:

      WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

      Disk /dev/sda: 160.0 GB, 160041885696 bytes
      255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1               1   312581807   156290903+  ee  GPT
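
    A sketch of how to find what is actually consuming the root filesystem (all read-only, safe to run):

      # largest top-level directories on / only; -x stays on this filesystem
      sudo du -xh --max-depth=1 / | sort -h | tail
      df -h /     # bytes
      df -i /     # inodes; a full inode table triggers the same warning

    It is also worth checking for deleted files still held open by a running process (lsof +L1 lists them), since that space is only returned once the process exits.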

  • I can't finish installation of 12.10 on Windows 8

    - by Janna Zhou
    When I followed the instructions to install Ubuntu 12.10 with Wubi, a window asked me to reboot the computer to finish the installation. I did so and got stuck with the message below. Could you please tell me how to finish the installation? I'm worried that if I follow the instruction below to remove_hiberfile in Windows, it would result in a failure to boot my Windows 8 system:

      Completing the Ubuntu installation.
      For more installation boot options, press 'ESC' now... 0

      BusyBox v1.19.3 (Ubuntu 1:1.19.3-7ubuntu1) built-in shell (ash)
      Enter 'help' for a list of built-in commands.

      (initramfs) Windows is hibernated, refused to mount.
      Failed to mount '/dev/sda2': Operation not permitted
      The NTFS partition is hibernated. Please resume and shutdown Windows properly, or mount the volume read-only with the 'ro' mount option, or mount the volume read-write with the 'remove_hiberfile' mount option. For example type on the command line:

        mount -t ntfs-3g -o remove_hiberfile /dev/sda2 /isodevice

      mount: mounting /dev/sda2 on /isodevice failed: No such device
      Could not find the ISO /ubuntu/install/installation.iso
      This could also happen if the file system is not clean because of an operating system crash, an interrupted boot process, an improper shutdown, or unplugging of a removable device without first unmounting or ejecting it. To fix this, simply reboot into Windows, let it fully start, log in, run 'chkdsk /r', then gracefully shut down and reboot back into Windows. After this you should be able to reboot again and resume the installation.
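
    The "hibernated" state here usually comes from Windows 8's Fast Startup, which hibernates instead of fully shutting down. A sketch of the usual remedy, run inside Windows before rebooting into the installer (this disables hibernation system-wide rather than deleting anything by hand):

      :: from an elevated Command Prompt in Windows 8
      powercfg /h off
      shutdown /s /t 0

    The full shutdown afterwards leaves the NTFS volume clean, so the installer can mount it without remove_hiberfile.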

  • 550 “Overwrite permission denied” when editing a file via FTP

    - by nodebunny
    DreamHost recently moved my accounts to a new shared box, and now I can't edit files via UltraEdit's built-in FTP client, which messes up my workflow! What did they do that this no longer works? It stopped working after they moved me. Here's the output from the FTP console in UltraEdit:

      10/26/2011 10:42:36 AM: 220 DreamHost FTP Server
      10/26/2011 10:42:36 AM: USER nodebunny
      10/26/2011 10:42:36 AM: 331 Password required for ninjawww
      10/26/2011 10:42:36 AM: PASS xxxxxxxx
      10/26/2011 10:42:36 AM: 230 User nodebunny logged in
      10/26/2011 10:42:36 AM: FEAT
      10/26/2011 10:42:36 AM: 211-Features: LANG ja-JP.UTF-8;ja-JP;zh-TW;fr-FR;zh-CN;en-US*;bg-BG;ko-KR.UTF-8;ko-KR MDTM MFMT TVFS UTF8 MFF modify;UNIX.group;UNIX.mode; MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.mode*;UNIX.owner*; REST STREAM SIZE 211 End
      10/26/2011 10:42:36 AM: OPTS UTF8 ON
      10/26/2011 10:42:36 AM: 200 UTF8 set to on
      10/26/2011 10:42:36 AM: PWD
      10/26/2011 10:42:36 AM: 257 "/" is the current directory
      10/26/2011 10:42:36 AM: CWD /dev/proj/nodebunny
      10/26/2011 10:42:36 AM: 250 CWD command successful
      10/26/2011 10:42:36 AM: PWD
      10/26/2011 10:42:36 AM: 257 "/dev/proj/nodebunny/lib/Buffer" is the current directory
      10/26/2011 10:42:37 AM: TYPE I
      10/26/2011 10:42:37 AM: 200 Type set to I
      10/26/2011 10:42:37 AM: PORT 10,15,55,125,226,16
      10/26/2011 10:42:37 AM: 200 PORT command successful
      10/26/2011 10:42:37 AM: STOR Buffer.pm
      10/26/2011 10:42:37 AM: 550 Buffer.pm: Overwrite permission denied
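
    The 550 is a server-side refusal, so a sketch of what to check over SSH, assuming shell access is enabled for the account (the path is taken from the log above):

      ls -l  /dev/proj/nodebunny/lib/Buffer/Buffer.pm   # who owns it, and is it writable?
      ls -ld /dev/proj/nodebunny/lib/Buffer             # directory permissions matter too
      chmod u+w /dev/proj/nodebunny/lib/Buffer/Buffer.pm   # if the write bit was lost in the move

    If the owner shown is a different user than the FTP login, the files most likely need to be re-chowned by support after the migration.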

  • Kids don’t mark their own homework

    - by jamiet
    During a discussion at work today about doing some thorough acceptance testing of the system I currently work on, the topic of who should actually do the testing came up. I remarked that I didn't think that I, as the developer, should be doing acceptance testing, and a colleague, Russ Taylor, agreed with me and then came out with this little pearler:

      Kids don't mark their own homework

    Maybe it's a common turn of phrase, but I had never heard it before and, to me, it sums up very succinctly my feelings on the matter. I tweeted about it and it got a couple of retweets as well as a slightly different perspective from Bruce Durling, who said:

      I'm of the opinion that testers should be in the dev team & the dev *team* should be responsible for quality

    Bruce makes a good point that testers should be considered part of the dev team. I agree wholly with that and don't think that point of view necessarily conflicts with Russ's analogy. Yes, developers should absolutely be responsible for testing their own work; I also think that in the murky world of data integration there is often a need for a third party to validate that work. Improving testing mechanisms for data integration projects is something that is near and dear to my heart, so I would welcome any other thoughts around this. Let me know if you have any in the comments! @Jamiet

  • Stress testing OpenCL/ATI GPU on 12.04

    - by lurscher
    What do people normally use to stress test their GPU on Ubuntu 12.04? I tried installing the Phoronix Test Suite from the Ubuntu .deb, but it tries to install freeglut3-dev and at the same time complains about it:

      $ phoronix-test-suite benchmark pts/opencl
      The following dependencies are needed and will be installed: freeglut3-dev
      This process may take several minutes.
      Reading package lists...
      Building dependency tree...
      Reading state information...
      freeglut3-dev is already the newest version.
      0 upgraded, 0 newly installed, 0 to remove and 90 not upgraded.

      There are dependencies still missing from the system:
      - OpenGL Utility Kit / GLUT

      1: Ignore missing dependencies and proceed with installation.
      2: Skip installing the tests with missing dependencies.
      3: Re-attempt to install the missing dependencies.
      4: Quit the current Phoronix Test Suite process.
      Missing dependencies action: 4

    So even though it is installing freeglut3, it still complains about missing GLUT. You can't win against GLUT, it seems. So, what do people use for this? I mean, really, because I have tried googling for an hour and it is not paying off. Thanks!
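
    A sketch of two things worth trying before abandoning the suite: install the runtime GLUT library alongside the -dev package, and if the check still misfires, pick option 1 at the prompt, which only skips the stale dependency check rather than the benchmark itself:

      sudo apt-get install freeglut3 freeglut3-dev
      phoronix-test-suite benchmark pts/opencl   # choose 1 to ignore the stale check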

  • Thunderbird compact is taking forever

    - by mulllhausen
    One day I came in to work and found that our development server, an Ubuntu box, had a full hard disk. I did a bit of investigation using the du command, and it seems Mozilla Thunderbird is the main culprit. After burning off some backups, the disk was left at 94%:

      $ df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda1             895G  791G   59G  94% /
      none                  4.0G  300K  4.0G   1% /dev
      none                  4.0G  1.4M  4.0G   1% /dev/shm
      none                  4.0G  140K  4.0G   1% /var/run
      none                  4.0G     0  4.0G   0% /var/lock
      none                  4.0G     0  4.0G   0% /lib/init/rw

      $ cd
      $ du -ch | grep [0-9]G
      666G  ./.thunderbird/ccsmcruu.default/ImapMail/mail.adofms.com.au
      666G  ./.thunderbird/ccsmcruu.default/ImapMail
      667G  ./.thunderbird/ccsmcruu.default
      667G  ./.thunderbird
      2.2G  ./.VirtualBox/Machines/iBike/Snapshots
      2.2G  ./.VirtualBox/Machines/iBike
      2.2G  ./.VirtualBox/Machines
      2.2G  ./.VirtualBox
      670G  .
      670G  total

    I did some reading and found that Mozilla Thunderbird does not compact folders by default, i.e. all of the old emails that were sent to the trash are still kept. One of the mailboxes used to get a lot of spam, so I guess this accounts for the 667 GB. I opened up Thunderbird to see how much space the inbox actually takes up, and it turns out to be approximately 500 MB, over 1000 times less than the data that has not been compacted over the years. So I right-clicked the inbox in the folder tree on the left and selected 'Compact'. I left it for about 12 hours, but even after that the status bar still said 'compacting folder'. I don't use Thunderbird on this PC (it belonged to a colleague who has left the company), but I do occasionally need to look through the inbox for references to the project I am working on, so deleting all traces of Thunderbird is not an option. My question is: is there any way I can monitor the progress of Thunderbird's compact function? I would really like to know how long it is going to take. Also, is there any way to speed up the compacting process?
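
    A rough progress indicator, as a sketch: compaction rewrites each mbox file into a temporary file (named nstmp) in the same directory, so watching that directory shows how far it has got (the path is taken from the du output above):

      watch -n 60 'ls -lh ~/.thunderbird/ccsmcruu.default/ImapMail/mail.adofms.com.au'

    When the nstmp file stops growing and replaces the original, the compact is done. On a folder this size it can plausibly take many hours, since roughly 666 GB has to be read and rewritten.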

  • How to install mod_ssl for Apache

    - by Nick Foote
    OK, so I installed Apache httpd a while ago and have recently come back to it to try to set up SSL and get it serving several different Tomcat servers. At the moment I have two completely separate Tomcat instances serving up two slightly different versions of my web app (one for dev and one for demo, say) on two different ports: mydomain.com:8081 and mydomain.com:8082.

    Back in January I successfully used mod_jk to get httpd to serve those same Tomcat instances at http://www.mydomain.com:8090/dev and http://www.mydomain.com:8090/demo (8090 because I've got another app running on 8080 via Jetty at this stage) using the following in httpd.conf:

      LoadModule jk_module modules/mod_jk.so
      JkWorkersFile conf/workers.properties
      JkLogFile logs/mod_jk.log
      JkLogLevel debug

      <VirtualHost *:8090>
        JkMount /devd* tomcatDev
        JkMount /demo* tomcatDemo
      </VirtualHost>

    What I'm now trying to do is enable SSL. I've added the following to httpd.conf:

      Listen 443
      <VirtualHost _default_:443>
        JkMount /dev* tomcatDev
        JkMount /demo* tomcatDemo
        SSLEngine on
        SSLCertificateFile "/opt/httpd/conf/localhost.crt"
        SSLCertificateKeyFile "/opt/httpd/conf/keystore.key"
      </VirtualHost>

    But when I try to restart Apache with "apachectl restart" (yes, after shutting down the other app I mentioned so it doesn't toy with HTTPS connections) I continuously get the error:

      Invalid command 'SSLEngine', perhaps misspelled or defined by a module not included in the server configuration.
      httpd not running, trying to start

    I've looked in the httpd/modules dir and indeed there is no mod_ssl, only mod_jk.so and httpd.exp. I've tried using yum to install mod_ssl; it says it's already installed. Indeed I can locate mod_ssl.so in /usr/lib/httpd/modules, but this is NOT the path where I've installed httpd, which is /opt/httpd; in fact /usr/lib/httpd contains nothing but the modules dir. Can anyone tell me how to install mod_ssl properly for my installed location of httpd so I can get past this error?
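
    Since this httpd was built from source into /opt/httpd, the yum-installed mod_ssl.so (built against the distro's Apache in /usr/lib/httpd) generally cannot be mixed in. A sketch of the usual fix, assuming the original source tree for the same httpd version is still available:

      cd httpd-2.2.x                 # the version originally built
      ./configure --prefix=/opt/httpd --enable-so --enable-ssl
      make && sudo make install      # installs modules/mod_ssl.so under /opt/httpd

      # then in /opt/httpd/conf/httpd.conf:
      #   LoadModule ssl_module modules/mod_ssl.so

    Building --enable-ssl requires the OpenSSL development headers (openssl-devel on RHEL-family systems).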

  • With WPF and Silverlight against cancer

    - by Laurent Bugnion
    MVPs are well known for their good hearts (as the GeekGive initiative shows), and Client App Dev MVP Gregor Biswanger is no exception. At the latest MVP summit (beginning of March 2011), he took along a DVD about WPF 4 and Silverlight 4 and asked a few Microsoft superstars to sign it. Right now the DVD is being auctioned on eBay, and of course the proceeds will go to a charitable cause: the German League against Cancer (Deutsche Krebshilfe). The post is in German and English (scroll down for the English text). This sounds like a great idea, and considering who signed it, it is going to be a real collectible:

      - Scott Hanselman (Principal Program Manager Lead in Server and Tools Online)
      - Tim Heuer (Program Manager for Microsoft Silverlight)
      - Rob Relyea (Principal Program Manager Lead - Client Platform WPF & Silverlight)
      - Pete Brown (Developer Division Community Program Manager - Windows Client)
      - Eric Fabricant (Program Manager WPF)
      - Jeff Wilcox (Silverlight Senior SDE)
      - Jeffrey R Ferman (SDET Visual Studio Client Dev Tools)
      - Chan Verbeck (Expression Blend Team)
      - Yaniv Feinberg (Expression Blend Team)
      - Douglas Olson (Director Dev Expression)
      - Samuel W. Bent (Principal Software Design Engineer WPF)
      - John Papa (Technical Evangelist for Silverlight)

    So if you feel like making a generous gesture, go ahead and take a look at the auction, and talk about it around you. Let's prove again that geeks rule, also when it comes to giving to a good cause! Cheers! Laurent

  • Click Once Deployment Process and Issue Resolution

    - by Geordie
    Introduction

    We are adopting ClickOnce as a deployment standard for thick .NET application clients. The latest version of this tool has matured to a point where it can be used in an enterprise environment. This guide identifies how to use ClickOnce deployment and promote code through the dev, test and production environments.

    Why Use ClickOnce over SCCM

    If we already use SCCM, why add ClickOnce to the deployment options? The advantage of ClickOnce is the ability to update the code in a single location and have the update flow automatically down to the user community. There have been challenges in the past with getting configuration updates to download, but these can now be overcome. With SCCM you can do the same thing, but it then needs to be packaged and pushed out to users. Each time a new user is added to an application, an administrator has to spend time pushing out any required application packages. With ClickOnce, the user goes to a web link and the application and its prerequisites are installed automatically.

    New Deployment Steps Overview

    Deployment in an enterprise environment includes several steps as the solution moves through the development life cycle before being released into production. To mitigate risk during the release phase, it is important to ensure the solution is not deployed directly into production from the development tools. Although this is the easiest path, it can introduce untested code into production and produce unexpected results.

    1. Deploy the client application to a development web server using the Visual Studio 2008 ClickOnce deployment tools. Once potential production versions of the solution are being generated, ensure the production install URL is specified when deploying code from Visual Studio. (For details see 'Deploying ClickOnce Code from Visual Studio'.)
    2. xcopy the code to the test server. Run the MageUI tool to update the URLs, signing and version numbers to match the test server. (For details see 'Migrating ClickOnce Code to a new Server without using Visual Studio'.)
    3. xcopy the code to the production server. Run the MageUI tool to update the URLs, signing and version numbers to match the production server. The certificate used to sign the code should be provided by a certificate authority that will be trusted by the client machines. Finally, make sure the setup.exe contains the production install URL; if not, redeploy the solution from Visual Studio to the dev environment specifying the production install URL, then xcopy the setup.exe file from dev to production. (For details see 'Migrating ClickOnce Code to a new Server without using Visual Studio'.)

    Detailed Deployment Steps

    Deploying ClickOnce Code from Visual Studio

    Open Visual Studio and create a new WinForms or WPF project. In the solution explorer, right-click on the project and select 'Publish' in the context menu. The Publish Wizard will start. Enter the development deployment path; this could be a local directory or a web site. When first publishing the solution, set this to a development web site and Visual Studio will create a site with an install.htm page. Click Next. Select whether the application will be available both online and offline, then click Finish.

    Once the initial deployment is completed, republish the solution, this time mapping to the directory that holds the code that was just published. This time the Publish Wizard contains an additional option: the setup.exe file that is created has the install URL hardcoded in it, and it is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production, so enter the production URL and deploy the solution to the dev folder; this file can then be saved for later use in deployment to production. During development this URL should point to the development site to avoid accidentally installing the production application.

    Visual Studio will publish the application to the desired location; in the process it will create an anonymous 'pfx' certificate to sign the deployment configuration files. A production certificate should be acquired in preparation for deployment to production.

    [Screenshots: the directory structure created by Visual Studio; the application files created by Visual Studio; the development web site (install.htm) created by Visual Studio]

    Migrating ClickOnce Code to a new Server without using Visual Studio

    To migrate the ClickOnce application code to a new server, a tool called MageUI is needed to modify the .application and .manifest files. The MageUI tool is usually located in the 'C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin' folder, or it can be downloaded from the web. When deploying to a new environment, copy all files in the project folder to the new server, in this case the 'ClickOnceSample' folder and contents. The old application versions can be deleted, in this case 'ClickOnceSample_1_0_0_0' and 'ClickOnceSample_1_0_0_1'. Open IIS Manager and create a virtual directory that points to the project folder; also make publish.htm the default web page.

    Run the MageUI tool and open the .application file in the root project folder (in this case in the 'ClickOnceSample' folder). Click on Deployment Options in the left-hand list, update the URL to the new server URL, and save the changes. When MageUI tries to save the file it will prompt for the file to be signed. This step cannot be bypassed if you want the ClickOnce deployment to work from a web site. The easiest solution for test is to use the auto-generated certificate that Visual Studio created for the project; this certificate can be found with the project source code. To save time, go to File > Preferences and configure the 'Use default signing certificate' fields.

    Future deployments will then only require application files to be transferred to the new server. The only differences are that in the .application file the 'Version' must be updated to match the new version, and the 'Application Reference' has to be updated to point to the new .manifest file.

    Updating the Configuration File of a ClickOnce Deployment Package without using Visual Studio

    When an update to the configuration file is required, modifying the ClickOnceSample.exe.config.deploy file will not result in current users getting the new configuration. We do not want to go back to Visual Studio and generate a new version, as this might introduce unexpected code changes. A new version of the application can instead be created by copying the latest version folder (in this case ClickOnceSample_1_0_0_2), pasting it into the Application Files directory, and renaming the copy 'ClickOnceSample_1_0_0_3'. In the new folder, open the configuration file in Notepad and make the configuration changes.

    Run MageUI and open the .manifest file in the newly copied directory (ClickOnceSample_1_0_0_3). Edit the manifest version to reflect the newly copied files (in this case 1.0.0.3), then save the file. Next, open the .application file in the root folder. Again update the version to 1.0.0.3. Since the file has not moved, the Deployment Options/Start Location URL should still be correct; the Application Reference, however, needs to be updated to point to the new version's .manifest file. Save the file.

    The next time a user runs the application, the new version of the configuration file will be downloaded. It is worth noting that there are two different types of configuration parameter, application and user, and with ClickOnce deployment the difference is significant. When an application is downloaded, the configuration file is also brought down to the client machine. The developer may have written code to update the user parameters in the application; as a result, each time a new version of the application is downloaded, the user parameters are at risk of being overwritten. With ClickOnce deployment the system knows whether the user parameters still hold their default values: if they do, they will be overwritten with the new defaults from the configuration file; if they have been updated by the user, they will not be overwritten.

    [Screenshot: the settings configuration view in Visual Studio]

    Production Deployment

    When deploying the code to production, it is prudent to disable the development and test deployment sites. This allows errors such as an incorrect URL to be quickly identified in the initial testing after deployment. If the sites are active, there is no way to know whether the application was downloaded from the production deployment or redirected to test or dev.

    Troubleshooting

    Clicking the install button on the install.htm page fails with:

      Error: URLDownloadToCacheFile failed with HRESULT '-2146697210'
      Error: An error occurred trying to download <file>

    This is due to the setup.exe file pointing to the wrong location. As described above, the setup.exe that is created has the install URL hardcoded in it, and the Publish Wizard screen is where that URL is specified. At some point a setup.exe must be generated for production: enter the production URL, deploy the solution to the dev folder, and save the resulting file for later use in deployment to production. During development this URL should point to the development site to avoid accidentally installing the production application.
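
    The same retargeting and re-versioning that MageUI does by hand can also be scripted, which reduces the manual-edit risk during promotion. A sketch using mage.exe from the same SDK Bin folder (the paths, URL, version and certificate below are all placeholders):

      :: repoint and re-version the deployment manifest, then re-sign it
      mage -Update ClickOnceSample.application -Version 1.0.0.3 -AppManifest "Application Files\ClickOnceSample_1_0_0_3\ClickOnceSample.exe.manifest" -ProviderUrl "http://prodserver/ClickOnceSample/ClickOnceSample.application"
      mage -Sign ClickOnceSample.application -CertFile mycert.pfx -Password secret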

  • Files are piling up in /usr/src/. How can I stop this?

    - by Bogdanovist
    I have been having many serious system issues over the past few weeks and have been scratching my head as to why. I've now worked out that the problem is having no inodes left on the root partition:

      $ df -i
      Filesystem           Inodes   IUsed    IFree IUse% Mounted on
      /dev/sda6            732960  724565     8395   99% /
      udev                 125179     518   124661    1% /dev
      tmpfs                127001     464   126537    1% /run
      none                 127001       4   126997    1% /run/lock
      none                 127001       8   126993    1% /run/shm
      /dev/sda7           5234688  144639  5090049    3% /home

    What is the cause? I've found that 400K of those inodes are in use in /usr/src:

      $ ls /usr/src
      linux-headers-3.2.0-25-generic      linux-headers-3.2.0-33
      linux-headers-3.2.0-25-generic-pae  linux-headers-3.2.0-33-generic
      linux-headers-3.2.0-26              linux-headers-3.2.0-33-generic-pae
      linux-headers-3.2.0-26-generic      linux-headers-3.2.0-35
      linux-headers-3.2.0-26-generic-pae  linux-headers-3.2.0-35-generic
      linux-headers-3.2.0-27              linux-headers-3.2.0-35-generic-pae
      linux-headers-3.2.0-27-generic      linux-headers-3.2.0-36
      linux-headers-3.2.0-27-generic-pae  linux-headers-3.2.0-36-generic
      linux-headers-3.2.0-29              linux-headers-3.2.0-36-generic-pae
      linux-headers-3.2.0-29-generic      linux-headers-3.2.0-39
      linux-headers-3.2.0-29-generic-pae  linux-headers-3.2.0-39-generic
      linux-headers-3.2.0-30              linux-headers-3.2.0-39-generic-pae
      linux-headers-3.2.0-30-generic      linux-headers-3.2.0-40
      linux-headers-3.2.0-30-generic-pae  linux-headers-3.2.0-40-generic
      linux-headers-3.2.0-31              linux-headers-3.2.0-40-generic-pae
      linux-headers-3.2.0-31-generic      linux-headers-3.2.0-41
      linux-headers-3.2.0-31-generic-pae  linux-headers-3.2.0-41-generic
      linux-headers-3.2.0-32              linux-headers-3.2.0-41-generic-pae
      linux-headers-3.2.0-32-generic      linux-headers-3.2.0-43
      linux-headers-3.2.0-32-generic-pae

    Surely not all of these are actually needed? I've tried apt-get autoremove, but it leaves them all be. I don't want to remove them manually, but this is crippling my machine. Aside from the inode issue, they also take up almost 2 GB of the 11 GB system partition, which is getting full (80%). How can I safely remove the headers that are not needed?
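
    A sketch of a safer bulk removal: list the installed header packages, keep the ones matching the running kernel, and purge the rest through apt so dpkg's bookkeeping stays consistent (the package names below are examples from the listing above):

      uname -r                                         # note the running kernel version
      dpkg -l 'linux-headers-*' | awk '/^ii/ {print $2}'
      # purge old ones explicitly, e.g.:
      sudo apt-get purge linux-headers-3.2.0-25 linux-headers-3.2.0-25-generic linux-headers-3.2.0-25-generic-pae

    Purging through apt also removes the hundreds of thousands of small files under /usr/src that are consuming the inodes.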

  • Unable to format USB on 12.04 [daemon inhibited]

    - by santosamaru
    I tried to format my USB stick. The first time it worked, and all the data was gone, but now I can't save any files to the stick. So I tried to check whether it is working or broken. Here is the report:

      santos@santos:~$ sudo badblocks -v /dev/sdb
      [sudo] password for santos:
      Sorry, try again.
      [sudo] password for santos:
      Checking blocks 0 to 7824383
      Checking for bad blocks (read-only test): 0.00% done, 0:00 elapsed. (0/0/0 errdone
      Pass completed, 0 bad blocks found. (0/0/0 errors)
      santos@santos:~$ sudo badblocks -v -w /dev/sdb
      [sudo] password for santos:
      Sorry, try again.
      [sudo] password for santos:
      /dev/sdb is apparently in use by the system; it's not safe to run badblocks!
      santos@santos:~$

    How can I format the stick and fix this? I have read the question "Formatting Pen Drive causes 'Daemon Is Inhibited' Error", where the same thing happens ("the destination is read only" when trying to move items from the desktop). I also googled and found this forum thread, http://ubuntuforums.org/showthread.php?t=1955353, but following user13509's suggestion there did not help either.
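
    badblocks refuses to write while any partition on the stick is mounted (or held by the desktop's auto-mounter), so a sketch of the usual sequence:

      # see what is mounted from the stick, then release it
      mount | grep sdb
      sudo umount /dev/sdb1          # repeat for any other mounted partitions
      sudo badblocks -wsv /dev/sdb   # destructive read-write test
      # if the test passes, rebuild the filesystem
      sudo mkfs.vfat /dev/sdb1

    If the write test keeps failing even when nothing is mounted, the stick's controller has likely locked itself read-only after wear-out, which is not recoverable.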
