Search Results

Search found 36003 results on 1441 pages for 'try catch'.

Page 57 of 1441

  • Protected Videos not Playing Ubuntu 13.10 (Amazon Prime)

    - by Radeesh Koonichere
    Unable to play Amazon Prime videos in Chrome or Firefox. I tried deleting the Flash folder and even re-installed the OS. Ubuntu 13.10, Flash version: flashplugin-installer 11.2.202.310ubuntu1. YouTube works but Amazon Prime does not.

    Try 1, clear the Flash cache:

        cd ~/.adobe/Flash_Player
        rm -rf NativeCache AssetCache APSPrivateData2

    Try 2, install an older version of Flash (/usr/lib/flashplugin-installer/Flashplayer.so). Some other sites suggest installing HAL (sudo apt-get install hal) and running hald, but that did not work either, as HAL seems to be deprecated.
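
    For reference, a minimal sketch that chains the steps above and then reinstalls the packaged Flash plugin (the --reinstall step is an assumption, not something the poster reports trying):

        # clear the per-user Flash caches named in Try 1
        cd ~/.adobe/Flash_Player && rm -rf NativeCache AssetCache APSPrivateData2
        # reinstall the Ubuntu Flash plugin package (assumed extra step)
        sudo apt-get install --reinstall flashplugin-installer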

    Read the article

  • How can I export emacs documents in 13.04?

    - by Whippy
    When I try to export a document from org-mode in emacs, c-X, c-E now results in 'Can't find library org', whereas in 10.04 it opened the export dialog, allowing me to produce HTML, PDF etc. from the source org file. Searching with Google, I found this bug for Red Hat, which looks closely related. The trouble is I don't know how to get hold of the emacs-el package it talks about in Ubuntu in order to try its workaround, so I'm currently stuck.

    Read the article

  • Cannot boot from K/Ubuntu install disk on my UEFI system

    - by user93241
    I just got a new system and have been trying to get it set up w/ Win7 & Kubuntu dual-boot, but I've got a major problem. The BIOS of my motherboard (an Asus Crosshair 990FX) is strictly UEFI -- there is no legacy support mode available. I've been reading up on how to get Kubuntu installed in UEFI mode but no matter what I try I cannot seem to even boot into my install CD/USB key properly. I can get as far as the selection screen ("Try Kubuntu", "Install Kubuntu"...) but this screen starts off not appearing correctly. If I try moving the cursor around it sometimes seems to correct itself and show me my choices. But once I select "Try Kubuntu" it starts loading, the screen goes black and then proceeds to flicker -- about once every 5-10 seconds or so. This continues indefinitely. I've tried this with both Kubuntu & Ubuntu installation media, even the AMD64+Mac Ubuntu variety that is supposed to be a lot more flexible w.r.t. UEFI. The only hint I've had that the system might have booted correctly is a little drum sound that plays when booting from the Ubuntu install disk. Well, that and the fact that when I hit my system's power button it seems to shut down correctly, even ejecting the CD at the end. This might be a video driver issue; my system has two nVidia 550's, one of which is attached to my primary monitor. (The secondary isn't hooked up yet.) I'll keep looking over similar questions but any advice would be greatly appreciated. UPDATE: I've tried booting into my 12.04 install CD twice now, each time using two different options supplied by my BIOS. One seemed to offer the ability to boot into my CD under UEFI mode -- this didn't even produce the initial boot menu. The other method offers the ability to boot into my CD NOT under UEFI mode. This DOES produce the boot menu, but after this point it seems I still cannot get to a proper video mode to see what's going on.

    Read the article

  • Why has hibernation stopped working after moving my 12.04 install to a different laptop?

    - by megabytephreak
    I recently got a new laptop (Thinkpad X230). I didn't really have time to do a fresh install, so I copied my install (Xubuntu 12.04) over from my old machine (Thinkpad X61). Now when I try to hibernate, the hibernate step itself seems to work fine, but the machine doesn't try to resume when booting back up; it just does a fresh boot. Is there something I need to do so the system knows where to look for the hibernation data?
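
    For context, Ubuntu normally records which swap partition to resume from in /etc/initramfs-tools/conf.d/resume, so a common (hedged) fix after moving an install is to point that file at the new machine's swap and rebuild the initramfs; the UUID below is a placeholder:

        # find the UUID of the swap partition on the new laptop
        sudo blkid | grep swap
        # write it into the resume configuration (UUID is a placeholder)
        echo "RESUME=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" | sudo tee /etc/initramfs-tools/conf.d/resume
        # rebuild the initramfs so the setting takes effect
        sudo update-initramfs -u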

    Read the article

  • Using the Same Domain to Bury Bad Publicity

    Receiving bad publicity can be a devastating blow to a brand's online reputation, and to mitigate the damage the best course of action is often to create enough alternate content to push the negative publicity down to the second, third, or even deeper search result pages. Most people do this by creating a number of different pages on new or alternate domains, but it can in fact be much more effective to create those pages on the same domain.

    Read the article

  • setting the position in different resolution

    - by Moaz
    I have a normal game window which is 640*480, and everything is fine, but when I maximize the window the objects translate to different positions on the screen. For example, if I have a circle drawn at the center of the normal window, when I maximize it the circle shifts away from the center of the screen. How do I adjust things so it draws at the center in both the normal and the maximized window?

    Read the article

  • Ubuntu 12.10 being unable to mount Windows drives

    - by kousik
    Ubuntu 12.10 doesn't show one of my Windows drives. When I try to mount the drive, Ubuntu shows:

        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read $MFTMirr: Input/output error
        Failed to mount '/dev/sda6': Input/output error
        NTFS is inconsistent.

    There is one more problem: when I try to install Windows 7 from a live DVD as well as from USB, the installation freezes at the 'Setup is starting...' screen. Please help me with this, because there are various important files on that drive.
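
    As an aside, the "NTFS is inconsistent" message from ntfs-3g usually means the filesystem needs a consistency check before it will mount; a hedged sketch of the usual next steps (the device name is taken from the error above, and chkdsk has to be run from a Windows environment):

        # from Linux: attempt ntfs-3g's basic repair of the NTFS metadata
        sudo ntfsfix /dev/sda6
        # or, from a Windows system or recovery console, run a full check
        # (the drive letter is a placeholder):
        #   chkdsk /f D: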

    Read the article

  • Can't Install Packages ubuntu 12.04

    - by Pumpkin
    I'm trying to install software using apt-get and the Ubuntu Software Center UI. When I try the Ubuntu Software Center it gets stuck saying "installing" and installs nothing, and when I try apt-get it gets stuck at "0% [Waiting for headers]". I tried modifying the sources.list file. I installed Ubuntu this morning. Do you have any suggestions for me? Thanks in advance for your time. Edit: after a while, I get the error 502 Bad Gateway.
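
    These are guesses at the usual culprits rather than anything from the post: a 502 Bad Gateway during "Waiting for headers" often points at a proxy or a broken mirror, so checking apt's proxy settings and clearing the cached package lists is a common first step:

        # check whether apt is configured to go through a proxy (a frequent cause of 502 errors)
        grep -ri proxy /etc/apt/apt.conf /etc/apt/apt.conf.d/ 2>/dev/null
        # discard any partially downloaded package lists and retry
        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get update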

    Read the article

  • GUI stops responding after a few seconds

    - by Nucklear
    I'm dual-booting Windows 7 and Ubuntu 12.04. When I boot Ubuntu, the interface stops responding to my input after a few seconds. I can launch applications from the Unity launcher, but when I try to close them nothing happens; even maximizing does nothing. I tried restarting the GUI, but after a few seconds it happens again. I had the same issue with older versions of Ubuntu and never figured it out.

    Read the article

  • Using nmap to scan for SQL Servers on a network

    I need to find all SQL Servers on the network, not just the ones in my domain. We know there are a couple of appliances that are potentially running SQL Server, and we want to see them too. What can I use to do this?
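
    One commonly used approach (a hedged sketch, with the subnet as a placeholder): SQL Server listens on TCP 1433 by default, and the SQL Server Browser service answers on UDP 1434, which nmap's ms-sql-info script can query for instance details:

        # sweep a subnet for the default SQL Server TCP port
        nmap -p 1433 --open 192.168.1.0/24
        # ask the SQL Server Browser service (UDP 1434) about the instances it knows of
        nmap -sU -p 1434 --script ms-sql-info 192.168.1.0/24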

    Read the article

  • how do I run CafeOBJ on my Ubuntu OS?

    - by Imogen
    I am attempting to run CafeOBJ on my Ubuntu 12.04 OS, but it won't open. When I try to run it on my other Windows 7 OS, I get a LIBEAY error message. What do I do to run this program, which used to run fine on Windows? I have installed it correctly and all the files are present, but the laptop does not treat the CafeOBJ application as an application. In summary, I am having LOADS of fun screaming at my laptop to try to make it understand. Please, someone help!

    Read the article

  • Visual Studio Exceptions dialogs

    - by Daniel Moth
    Previously I covered step 1 of live debugging with start and attach. Once the debugger is attached, you want to go to step 2 of live debugging, which is to break. One way to break under the debugger is to do nothing, and just wait for an exception to occur in your code. This is true for all types of code that you debug in Visual Studio, so let's consider the following piece of C# code:

         3: static void Main()
         4: {
         5:     try
         6:     {
         7:         int i = 0;
         8:         int r = 5 / i;
         9:     }
        10:     catch (System.DivideByZeroException) {/*gulp. sue me.*/}
        11:     System.Console.ReadLine();
        12: }

    If you run this under the debugger, do you expect an exception on line 8? It is a trick question: you have to know whether I have configured the debugger to break when exceptions are thrown (first-chance exceptions) or only when they are unhandled. The place you do that is in the Exceptions dialog, which is accessible from the Debug->Exceptions menu and on my installation looks like this: [screenshot]

    Note that I have checked all CLR exceptions. I could have expanded (as shown for the C++ case in my screenshot) and selected specific exceptions. To read more about this dialog, please read the corresponding Exception Handling debugging MSDN topic and all its subtopics.

    So, for the code above, the debugger will break execution due to the thrown exception (exactly as if the try..catch was not there), and I see the following Exception Thrown dialog: [screenshot]

    Note the following: I can hit Continue (or hit Break and then later Continue) and the program will continue fine, since I have a catch handler. If this were an unhandled exception, that is what the dialog would say (instead of first-chance exception) and continuing would crash the app. That hyperlinked text ("Open Exception Settings") opens the Exceptions dialog I described further up. The coolest thing to note is the checkbox - this is new in this latest release of Visual Studio: it is a shortcut to the checkbox in the Exceptions dialog, so you don't have to open that dialog to change this setting for this specific exception - you can toggle the option right from this dialog.

    Finally, if you try the code above on your system, you may observe a couple of differences from my screenshots. The first is that you may have an additional column of checkboxes in the Exceptions dialog. The second is that the last dialog I shared may look different to you. It all depends on the Debug->Options settings, and the two relevant settings are in this screenshot: [screenshot] The Exception Assistant is what configures the look of the UI when the debugger wants to indicate an exception to you, and the Just My Code setting controls the extra column in the Exceptions dialog. You can read more about those options on MSDN: How to break on User-Unhandled exceptions (plus Gregg's post) and Exception Assistant.

    Before I leave you to go play with this stuff a bit more, please note that this level of debugging is now available for JavaScript too, and if you are looking at the Exceptions dialog and wondering what the "GPU Memory Access Exceptions" node is about, stay tuned on the C++ AMP blog ;-)

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • JMSContext, @JMSDestinationDefinition, DefaultJMSConnectionFactory with simplified JMS API: TOTD #213

    - by arungupta
    "What's New in JMS 2.0" Part 1 and Part 2 provide comprehensive introduction to new messaging features introduced in JMS 2.0. The biggest improvement in JMS 2.0 is introduction of the "new simplified API". This was explained in the Java EE 7 Launch Technical Keynote. You can watch a complete replay here. Sending and Receiving a JMS message using JMS 1.1 requires lot of boilerplate code, primarily because the API was designed 10+ years ago. Here is a code that shows how to send a message using JMS 1.1 API: @Statelesspublic class ClassicMessageSender { @Resource(lookup = "java:comp/DefaultJMSConnectionFactory") ConnectionFactory connectionFactory; @Resource(mappedName = "java:global/jms/myQueue") Queue demoQueue; public void sendMessage(String payload) { Connection connection = null; try { connection = connectionFactory.createConnection(); connection.start(); Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); MessageProducer messageProducer = session.createProducer(demoQueue); TextMessage textMessage = session.createTextMessage(payload); messageProducer.send(textMessage); } catch (JMSException ex) { ex.printStackTrace(); } finally { if (connection != null) { try { connection.close(); } catch (JMSException ex) { ex.printStackTrace(); } } } }} There are several issues with this code: A JMS ConnectionFactory needs to be created in a application server-specific way before this application can run. Application-specific destination needs to be created in an application server-specific way before this application can run. Several intermediate objects need to be created to honor the JMS 1.1 API, e.g. ConnectionFactory -> Connection -> Session -> MessageProducer -> TextMessage. Everything is a checked exception and so try/catch block must be specified. Connection need to be explicitly started and closed, and that bloats even the finally block. The new JMS 2.0 simplified API code looks like: @Statelesspublic class SimplifiedMessageSender { @Inject JMSContext context; @Resource(mappedName="java:global/jms/myQueue") Queue myQueue; public void sendMessage(String message) { context.createProducer().send(myQueue, message); }} The code is significantly improved from the previous version in the following ways: The JMSContext interface combines in a single object the functionality of both the Connection and the Session in the earlier JMS APIs.  You can obtain a JMSContext object by simply injecting it with the @Inject annotation.  No need to explicitly specify a ConnectionFactory. A default ConnectionFactory under the JNDI name of java:comp/DefaultJMSConnectionFactory is used if no explicit ConnectionFactory is specified. The destination can be easily created using newly introduced @JMSDestinationDefinition as: @JMSDestinationDefinition(name = "java:global/jms/myQueue",        interfaceName = "javax.jms.Queue") It can be specified on any Java EE component and the destination is created during deployment. JMSContext, Session, Connection, JMSProducer and JMSConsumer objects are now AutoCloseable. This means that these resources are automatically closed when they go out of scope. This also obviates the need to explicitly start the connection JMSException is now a runtime exception. Method chaining on JMSProducers allows to use builder patterns. No need to create separate Message object, you can specify the message body as an argument to the send() method instead. Want to try this code ? Download source code! Download Java EE 7 SDK and install. 
Start GlassFish: bin/asadmin start-domain Build the WAR (in the unzipped source code directory): mvn package Deploy the WAR: bin/asadmin deploy <source-code>/jms/target/jms-1.0-SNAPSHOT.war And access the application at http://localhost:8080/jms-1.0-SNAPSHOT/index.jsp to send and receive a message using classic and simplified API. A replay of JMS 2.0 session from Java EE 7 Launch Webinar provides complete details on what's new in this specification: Enjoy!

    Read the article

  • Doubts about several best practices for rest api + service layer

    - by TheBeefMightBeTough
    I'm going to be starting a project soon that exposes a restful api for business intelligence. It may not be limited to a restful api, so I plan to delegate requests to a service layer that then coordinates multiple domain objects (each of which has business logic local to the object). The api will likely have many calls as it is a long-term project.

    While thinking about the design, I recalled a few best practices: 1) use command objects at the controller layer (I'm using Spring MVC); 2) use DTOs at the service layer; 3) validate in both the controller and service layer, though for different reasons. I have my doubts about these recommendations.

    1) Using command objects adds a lot of extra single-purpose classes (potentially one per request). What exactly is the benefit? Annotation-based validation can be done using this approach, sure. But what if I have two requests that take the same parameters, yet have different validation requirements? I would have to have two different classes with exactly the same members but different annotations? Bleh.

    2) I have heard that using DTOs is preferable to parameters because it makes for more maintainable code down the road (say, e.g., requirements change and the service parameters need to be altered). I don't quite understand this. Shouldn't an api be more-or-less set in stone? I would understand that in the early phases of a project (or, especially, an entire company) the domain itself will not be well understood, and thus core domain objects may change along with the apis that manipulate these objects. At that point, however, the number of api methods should be small and their dependents few, so changes to the methods could easily be tolerated from a maintainability standpoint. In a large api with many methods and a substantial domain model, I would think having a DTO for potentially each domain object would become unwieldy. Am I misunderstanding something here?

    3) I see validation in the controller and service layer as redundant in most cases. Why would I validate that parameters are not null and are in general well formed in the controller if the service is going to do exactly the same (and more)? Couldn't I just do all the validation in the service and throw a runtime exception with a list of bad parameters, then catch that in the controller to make the error messages more presentable? Better yet, couldn't I just make the error messages user-friendly in the service and let the exception trickle up to a global handler (ControllerAdvice in Spring, for example)? Is there something wrong with either of these approaches? (I do see a use case for controller validation if the input does not map one-to-one with the service input, but since the controllers are for a rest api and not forms, the api parameters will probably map directly to service parameters.)

    I also have a question about unchecked vs checked exceptions. Namely, I'm not really sure why I'd ever want to use a checked exception. Every time I have seen them used they just get wrapped into general exceptions (DomainException, SystemException, ApplicationException, w/e) to reduce the signature length of methods, or devs catch Exception rather than dealing with App1Exception, App2Exception, Sys1Exception, Sys2Exception. I don't see how either of these practices is very useful. Why not just use unchecked exceptions always and catch the ones you actually do care about? You could just document what unchecked exceptions the method throws.

    Read the article

  • Criminals and Other Illegal Characters

    - by Most Valuable Yak (Rob Volk)
    SQLTeam's favorite Slovenian blogger Mladen (b | t) had an interesting question on Twitter: http://www.twitter.com/MladenPrajdic/status/347057950470307841

    I liked Kendal Van Dyke's (b | t) reply: http://twitter.com/SQLDBA/status/347058908801667072

    And he was right! This is one of those pretty-useless-but-sounds-interesting propositions that I've based all my presentations on, and most of my blog posts. If you read all the replies you'll see a lot of good suggestions. I particularly like Aaron Bertrand's (b | t) idea of going into the Unicode character set, since there are over 65,000 characters available. But how to find an illegal character? Detective work?

    I'm working on the premise that if SQL Server will reject it as a name it would throw an error. So all we have to do is generate all Unicode characters, rename a database with that character, and catch any errors. It turns out that dynamic SQL can lend a hand here:

        IF DB_ID(N'a') IS NULL CREATE DATABASE [a];
        DECLARE @c INT=1, @sql NVARCHAR(MAX)=N'', @err NVARCHAR(MAX)=N'';
        WHILE @c<65536
        BEGIN
            BEGIN TRY
                SET @sql=N'alter database ' +
                         QUOTENAME(CASE WHEN @c=1 THEN N'a' ELSE NCHAR(@c-1) END) +
                         N' modify name=' + QUOTENAME(NCHAR(@c));
                RAISERROR(N'*** Trying %d',10,1,@c) WITH NOWAIT;
                EXEC(@sql);
                SET @c+=1;
            END TRY
            BEGIN CATCH
                SET @err=ERROR_MESSAGE();
                RAISERROR(N'Ooops - %d - %s',10,1,@c,@err) WITH NOWAIT;
                BREAK;
            END CATCH
        END
        SET @sql=N'alter database ' + QUOTENAME(NCHAR(@c-1)) + N' modify name=[a]';
        EXEC(@sql);

    The script creates a dummy database "a" if it doesn't already exist, and only tests single characters as a database name. If you have databases with single-character names then you shouldn't run this on that server.

    It takes a few minutes to run, but if you do you'll see that no errors are thrown for any of the characters. It seems that SQL Server will accept any character, no matter where they're from. (Well, there's one, but I won't tell you which. Actually there's 2, but one of them requires some deep existential thinking.)

    The output is also interesting, as quite a few codes do some weird things there. I'm pretty sure it's due to the font used in SSMS for the messages output window, not all characters are available. If you run it using the SQLCMD utility, and use the -o switch to output to a file, and -u for Unicode output, you can open the file in Notepad or another text editor and see the whole thing.

    I'm not sure what character I'd recommend to answer Mladen's question. I think the standard tab (ASCII 9) is fine. There are also several specific separator characters in the original ASCII character set (decimal 28-31). But of all the choices available in Unicode whitespace, I think my favorite would be the Mongolian Vowel Separator. Or maybe the zero-width space. (that'll be fun to print!) And since this is Mladen we're talking about, here's a good selection of "intriguing" characters he could use.

    Read the article

  • TFS 2012 API Create Alert Subscriptions

    - by Bob Hardister
    Originally posted on: http://geekswithblogs.net/BobHardister/archive/2013/07/24/tfs-2012-api-create-alert-subscriptions.aspx

    There were only a few posts on this and I felt like really important information was left out: what the defaults are, and how to create the filter string. Here's the code to create the subscription.

    Get the Collection:

        public TfsTeamProjectCollection GetCollection(string collectionUrl)
        {
            try
            {
                //connect to the TFS collection using the active user
                TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(new Uri(collectionUrl));
                tpc.EnsureAuthenticated();
                return tpc;
            }
            catch (Exception)
            {
                return null;
            }
        }

    Use Impersonation. Because my app is used to create "support tickets" as stories in TFS, I use impersonation so the subscription is set up for the "requester." That way I can take all the defaults for the subscription delivery preferences.

        public TfsTeamProjectCollection GetCollectionImpersonation(string collectionUrl, string impersonatingUserAccount)
        {
            // see: http://blogs.msdn.com/b/taylaf/archive/2009/12/04/introducing-tfs-impersonation.aspx
            try
            {
                TfsTeamProjectCollection tpc = GetCollection(collectionUrl);
                if (!(tpc == null))
                {
                    //get the TFS identity management service (v2 is 2012 only)
                    IIdentityManagementService2 ims = tpc.GetService<IIdentityManagementService2>();

                    //look up the user we want to impersonate
                    TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName,
                        impersonatingUserAccount, MembershipQuery.None, ReadIdentityOptions.None);

                    //create a new connection using the impersonated user account
                    //note: do not ensure authentication because the impersonated user may not have
                    //windows authentication at execution
                    if (!(identity == null))
                    {
                        TfsTeamProjectCollection itpc = new TfsTeamProjectCollection(tpc.Uri, identity.Descriptor);
                        return itpc;
                    }
                    else
                    {
                        //the user account is not found
                        return null;
                    }
                }
                else
                {
                    return null;
                }
            }
            catch (Exception)
            {
                return null;
            }
        }

    Create the Alert Subscription:

        public bool SetWiAlert(string collectionUrl, string projectName, int wiId, string emailAddress, string userAccount)
        {
            bool setSuccessful = false;
            try
            {
                //use impersonation so the event service creating the subscription will default to
                //the correct account: otherwise domain ambiguity could be a problem
                TfsTeamProjectCollection itpc = GetCollectionImpersonation(collectionUrl, userAccount);
                if (!(itpc == null))
                {
                    IEventService es = itpc.GetService(typeof(IEventService)) as IEventService;

                    DeliveryPreference deliveryPreference = new DeliveryPreference();
                    //deliveryPreference.Address = emailAddress;
                    deliveryPreference.Schedule = DeliverySchedule.Immediate;
                    deliveryPreference.Type = DeliveryType.EmailHtml;

                    //the following line does not work for two reasons:
                    //string filter = string.Format("\"ID\" = '{0}' AND \"Authorized As\" <> '[Me]'", wiId);
                    //1. the create fails because there is a space between Authorized As
                    //2. the explicit query criteria are all incorrect anyway
                    //   see the uncommented line for what does work: you have to create the subscription manually
                    //   and then get it to view what the filter string needs to be (see following commented code)

                    //this works
                    string filter = string.Format("\"CoreFields/IntegerFields/Field[Name='ID']/NewValue\" = '12175'" +
                        " AND \"CoreFields/StringFields/Field[Name='Authorized As']/NewValue\"" +
                        " <> '@@MyDisplayName@@'", projectName, wiId);

                    string eventName = string.Format("<PT N=\"ALM Ticket for Work Item {0}\"/>", wiId);
                    es.SubscribeEvent("WorkItemChangedEvent", filter, deliveryPreference, eventName);

                    ////use this code to get existing subscriptions: you can look at manually created
                    ////subscriptions to see what the filter string needs to be
                    //IIdentityManagementService2 ims = itpc.GetService<IIdentityManagementService2>();
                    //TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName,
                    //    userAccount,
                    //    MembershipQuery.None,
                    //    ReadIdentityOptions.None);
                    //var existingsubscriptions = es.GetEventSubscriptions(identity.Descriptor);

                    setSuccessful = true;
                    return setSuccessful;
                }
                else
                {
                    return setSuccessful;
                }
            }
            catch (Exception)
            {
                return setSuccessful;
            }
        }

    Read the article

  • Ipsec reload fails to load ipsec.conf Strongswan 5.0

    - by Quentin Swain
    I am having trouble configuring a connection to an Android device using a Fedora 17 Linux machine and strongSwan 5.0.1dr2. I have made some progress, but when I add the configuration to support XAuth authentication I receive an error when I try to reload the configuration file. I get a similar error for the value ikev1 for the keyexchange setting, and whenever I try to set a value for rightauth. Has anyone else had this problem? The man page for ipsec.conf and the documentation on the strongSwan wiki both indicate that these settings and values should be fine in 5.0.x.x. I could try setting authby, but that is deprecated according to the documentation I read, and the xauthpsk value isn't working. Any help is much appreciated, thanks.

        can not load config '/etc/ipsec.conf': /etc/ipsec.conf:25: syntax error, unexpected STRING [leftauth]

    Here is the ipsec.conf:

        # /etc/ipsec.conf - Openswan IPsec configuration file
        #
        # Manual: ipsec.conf.5
        #
        # Please place your own config files in /etc/ipsec.d/ ending in .conf

        version 2.0     # conforms to second version of ipsec.conf specification

        # basic configuration
        config setup
            # For Red Hat Enterprise Linux and Fedora, leave protostack=netkey
            protostack=netkey
            # Enable this if you see "failed to find any available worker"
            # nhelpers=0
            plutodebug=all

        conn %default
            ikelifetime=240m
            #keylifetime=20m
            keyingtries=3
            ikev2=no

        conn android
            left=10.1.12.212
            right=10.1.12.140
            leftxauthserver=yes
            leftauth=psk
            rightauth=xauth
            keyexchange=ikev1
            type=tunnel
            pfs=no
            rekey=no
            auto=start
            ike=aes256-md5;modp1024
            phase2=esp
            ikev2=no

        #You may put your configuration (.conf) file in the "/etc/ipsec.d/"
        #include /etc/ipsec.d/*.conf

    Read the article

  • BSOD trying to migrate Windows XP from a physical to a virtual machine

    - by pauldoo
    I am attempting to migrate a Windows XP Home installation from a physical machine to a virtual machine. The physical machine has two hard disks; the first is 250GB containing the "C:" drive, the second is 1TB containing "D:". I'd like to create a new virtual machine stored on D: which is a copy of the Windows XP Home installation currently on C:. (This will leave the 250GB drive clear for me to install a fresh copy of Windows 7, while still being able to access the old XP installation if necessary.)

    The first method I tried was to follow the instructions here: http://www.virtualbox.org/wiki/Migrate_Windows. I booted from an Ubuntu live CD in order to execute the Linux commands while the Windows system wasn't running. With this method the virtual machine would always blue screen on startup with a "STOP 0x0000007B" message. The instructions above say to try a "repair install" using the Windows XP disc; unfortunately my XP disc is scratched and will not boot, so I was unable to try a repair install.

    The second method I tried was to use "VMware Converter Standalone Client". This tool executed without any errors, but again produced a virtual machine that blue screens on startup with the same "STOP" message.

    Are there any other methods to move the Windows XP installation into a virtual machine? I think next I will try a more manual process to create the cloned virtual machine: install a fresh copy of Windows XP to a virtual machine, then once that is booting OK, ntfsclone the source C: partition over the top. Perhaps this will fix the booting problems if the issue is related to the MBR or partition table in some way.
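
    The ntfsclone step the poster is planning might look roughly like this (a hedged sketch: the device names are placeholders, both filesystems must be unmounted, and the target partition must be at least as large as the source):

        # clone the old XP system partition over the freshly installed virtual one
        # /dev/sda1 = physical C: partition, /dev/sdb1 = the VM's system partition (placeholders)
        sudo ntfsclone --overwrite /dev/sdb1 /dev/sda1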

    Read the article

  • "sorry, the maximum allowed clients from your host (10) are already connected" FTP error

    - by Sejanus
    Hello, I keep getting the "sorry, the maximum allowed clients from your host (10) are already connected" error whenever I try to transfer a large number of files. At first I thought it was a FileZilla bug, but I get the same error with basically every FTP client I've tried, including Total Commander under Wine. I do not get that error using Windows. I tried limiting the maximum allowed connections in FileZilla, both in the server settings and in the global settings; it didn't change anything. I also tried switching between passive and active modes (not sure if it's related at all, just a last desperate attempt), and that didn't change anything either. When I use the native FTP client (not sure what it's called, the one in Places - Connect to a server) I get an unhelpful "connection refused" error every time I transfer a large number of files. The connection is refused for particular files; if I click "Ignore" each time, the rest of the files transfer perfectly well, so I assume it's the very same error. Is there anything I can do? This really drives me mad, as transferring large numbers of files is part of my everyday job... P.S. This happens with many different FTP servers, and I don't get this error in Windows, so I assume it's not a server problem. P.P.S. I am aware of a similar question here; the answer provided just didn't solve it for me.

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki) a try. When I try to load my iptables rules on the VM I've been getting errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck with this error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild the dependencies on the virtual machine with depmod -a, but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read about creating the directory empty and re-running "depmod -a". I now don't get the dependencies error, but I get the following instead and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM and that perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that on the CentOS template there are no issues with this, so I understand it is to do with the VM config. Any help would be greatly appreciated.
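
    For what it's worth, the IPTABLES= line in the container's /vz/XXX.conf is normally managed from the hardware node with vzctl rather than edited inside the container; a hedged sketch (the container ID 101 is a placeholder, and the module list is the kind of subset the poster already uses):

        # on the OpenVZ host node: grant the container the iptables modules it needs,
        # then restart the container so the change takes effect
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_mangle ipt_REJECT ipt_LOG ipt_state iptable_nat" --save
        vzctl restart 101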

    Read the article

  • scanner no longer working windows 7

    - by Sackling
    I have a Windows 7 64-bit Pro machine that for some reason no longer works with a Brother all-in-one printer/scanner when I try to scan. This happened seemingly out of nowhere; printing still works. When I try to scan from Photoshop, Photoshop crashes. When I try to scan from the device manager (which takes a really long time to access the printer) I get an error saying: "Operation could not be completed (error 0x00000015). The device is not ready." I tried uninstalling and reinstalling the printer and drivers. I have also run the Brother driver cleaner and then reinstalled, with the same result every time. I had access to another HP all-in-one, and very strangely a similar issue happened when I set up this new printer: I am able to print, but when I try to scan from the HP tool nothing pops up. So my thought is it has something to do with the TWAIN driver? But I don't know what to do or check. Any help is appreciated.

    Read the article

  • samba sync password with unix password on debian wheezy

    - by Oz123
    I installed Samba on my server and I am trying to write a script to spare me the two steps needed to add a user, e.g.:

        adduser username
        smbpasswd -a username

    My smb.conf states:

        # This boolean parameter controls whether Samba attempts to sync the Unix
        # password with the SMB password when the encrypted SMB password in the
        # passdb is changed.
        unix password sync = yes

    Further reading brought me to the pdbedit man page, which states:

        -a   This option is used to add a user into the database. This command
             needs a user name specified with the -u switch. When adding a new
             user, pdbedit will also ask for the password to be used.

             Example: pdbedit -a -u sorce
                 new password:
                 retype new password

             Note pdbedit does not call the unix password synchronisation script
             if unix password sync has been set. It only updates the data in the
             Samba user database. If you wish to add a user and synchronise the
             password immediately, use smbpasswd's -a option.

    So... now I decided to try adding a user with smbpasswd.

    1st try, the Unix user does not exist yet:

        root@raspberrypi:/home/pi# smbpasswd -a newuser
        New SMB password:
        Retype new SMB password:
        Failed to add entry for user newuser.

    2nd try, the Unix user exists:

        root@raspberrypi:/home/pi# useradd mag
        root@raspberrypi:/home/pi# smbpasswd -a mag
        New SMB password:
        Retype new SMB password:
        Added user mag.

        # switch to user pi, and try to switch to mag
        root@raspberrypi:/home/pi# su pi
        pi@raspberrypi ~ $ su mag
        Password:
        su: Authentication failure

    So now I am asking myself: how do I make Samba passwords sync with Unix passwords? Where are Samba passwords stored? Can someone help enlighten me?
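
    A minimal wrapper of the kind described above might look like this (the script name is made up; it simply chains the two commands already mentioned):

        #!/bin/sh
        # add-smb-user.sh (hypothetical name): create a Unix account and a matching Samba entry
        set -e
        user="$1"
        adduser "$user"         # prompts for the Unix password
        smbpasswd -a "$user"    # prompts for the Samba password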

    Read the article

  • VNC authentication failure

    - by cf16
    I am trying to connect to my vncserver running on CentOS from my home computer, which is behind a firewall. I have both Win7 and Ubuntu installed on this machine. I get the error "VNC connection failed: vncserver too many security failures", and even when logging in with the right credentials (I reset the password on CentOS) I get "authentication failure". I find that I have to wait a whole day before I can log in again at all. Could it be because I try as root? I think it is also important that I have to connect to the remote CentOS machine through port 6050; no other port works for me. Do I have to do something with other ports? I see that vncserver is listening on 5901, and on 5902 if a second one is added, and I believe the connection is established because from time to time (after a long wait) the password prompt appears... right? I have created an additional user1, with passwords for CentOS and for VNC, and also a user2. I run service vncserver start and two servers start, one on :1 and a second on :2. When I try to connect to vncserverIP:1 I get what is described above, but when I try to connect to vncserverIP:2 it says the attempt was unsuccessful. Please help, what should I do? Additionally: how can I disable this lockout for testing purposes?
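
    On the lockout question: the "too many security failures" message comes from the VNC server blacklisting a host after repeated failed authentication attempts. In TigerVNC (the vncserver shipped with CentOS) the relevant Xvnc parameters are BlacklistThreshold and BlacklistTimeout; a hedged sketch of relaxing them for testing only, via the per-display arguments in /etc/sysconfig/vncservers (display numbers and values are placeholders):

        # /etc/sysconfig/vncservers
        VNCSERVERS="1:user1 2:user2"
        VNCSERVERARGS[1]="-BlacklistThreshold 100 -BlacklistTimeout 10"
        # then restart the service so the new arguments are picked up:
        #   service vncserver restart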

    Read the article

  • Wireless disconnects at random after upgrade to Ubuntu 10.4

    - by Daniel Elessedil Kjeserud
    After upgrading my home server from Ubuntu 8.10 to 10.4 my wireless seemingly drops out, even though my IRC client keeps its connection to the servers, so it looks like the machine just stops accepting wireless requests. A ping gives me this:

        Request timeout for icmp_seq 27
        ping: sendto: Host is down

    After a while the machine just starts responding again, without any interaction from me. When the machine comes back, this is what dmesg gives me:

        [   18.296288] wlan0: direct probe to AP 00:1b:63:22:a4:5f (try 1)
        [   18.296350] wlan0: deauthenticating from 00:1b:63:22:a4:5f by local choice (reason=3)
        [   18.296440] wlan0: direct probe to AP 00:1b:63:22:a4:5f (try 1)
        [   18.298697] wlan0: direct probe responded
        [   18.298706] wlan0: authenticate with AP 00:1b:63:22:a4:5f (try 1)
        [   18.306836] wlan0: authenticated
        [   18.306886] wlan0: associate with AP 00:1b:63:22:a4:5f (try 1)
        [   18.309396] wlan0: RX AssocResp from 00:1b:63:22:a4:5f (capab=0x411 status=0 aid=2)
        [   18.309402] wlan0: associated
        [   18.310187] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
        [   18.447742] apm: BIOS version 1.2 Flags 0x03 (Driver version 1.16ac)
        [   18.447748] apm: overridden by ACPI.
        [   19.163282] padlock: VIA PadLock not detected.
        [   28.352022] wlan0: no IPv6 routers present

        kjes@brin:~$ lspci
        02:07.0 Network controller: RaLink RT2561/RT61 rev B 802.11g

    The machine is on a wireless network with WPA2. It worked without any problems on the same wireless network when Ubuntu 8.10 was the most recent version, and there have been no changes to my network recently. Even though the server drops out, everything else on the network keeps working normally.

    Read the article
