Search Results

Search found 37931 results on 1518 pages for 'computer case'.


  • Scheme vs Haskell for an Introduction to Functional Programming?

    - by haziz
    I am comfortable with programming in C and C#, and will explore C++ in the future. I may be interested in exploring functional programming as a different programming paradigm. I am doing this for fun; my job does not involve computer programming, and I am somewhat inspired by the use of functional programming, taught fairly early, in computer science courses in college.

    Lambda calculus is certainly beyond my mathematical abilities, but I think I can handle functional programming. Which of Haskell or Scheme would serve as a good intro to functional programming? I use emacs as my text editor and would like to be able to configure it more easily in the future, which would entail learning Emacs Lisp. My understanding, however, is that Emacs Lisp is fairly different from Scheme and is also more procedural as opposed to functional.

    If I pursue Scheme, I would likely use "The Little Schemer", which I have already bought (it seems a little weird from my limited leafing through it). If I pursue Haskell, I would use "Learn You a Haskell for Great Good", and would also watch the Intro to Haskell videos by Dr Erik Meijer on Channel 9. Any suggestions, feedback or input appreciated. Thanks.

    P.S. BTW I also have access to F# since I have Visual Studio 2010, which I use for C# development, but I don't think that should be my main criterion for selecting a language.

    Read the article

  • Problems with Ubuntu and AMD A10-4655M APU

    - by Robert Hanks
    I have a new HP Sleekbook 6z with the AMD A10-4655M APU. I tried installing Ubuntu with Wubi -- the first attempt ended up with an 'AMD unsupported hardware' watermark that I wasn't able to remove (it appeared when I tried to update the drivers as Ubuntu suggested). On the second attempt Ubuntu installed (I stayed away from the suggested drivers) but the performance was extremely poor -- as in Windows Vista poor. I am not sure what the solution is -- whether I need to wait for a kernel update in Ubuntu, or whether there are other solutions -- I realise this is a new APU for the market. I would love to have Ubuntu 12.04 up and running -- Windows 7 does very well with this new processor, so Ubuntu should, well, be lightning fast.

    The trial on the Sleekbook with the Ubuntu 12.10 Alpha 2 release was a complete failure. I created a bootable USB. Using either the 'Try Ubuntu' or 'Install Ubuntu' option resulted in the usual purple Ubuntu splash screen, followed by nothing... as in a black screen without any hint of life. Interestingly, one can hear the Ubuntu intro sound. In case you are wondering, this same USB was subsequently trialed on another computer with an Intel Atom processor. It worked flawlessly. Lastly, a second trial on the Sleekbook gave the same results as described in the first paragraph.

    Perhaps the 12.10 Beta will overcome this issue, or the finalised 12.10 release in October. I don't have the expertise to know the cause of this behaviour -- the issue could be something else entirely. Sadly, the Windows 7 performance is very good with this processor -- very similar, and in some instances superior, to the 2nd generation Intel i5 based computer I use at my workplace. Whatever the cause of the poor performance with Ubuntu 12.04 or 12.10 Alpha 2, the situation doesn't bode well for Ubuntu. Ubuntu aside, the HP Sleekbook is a good performer for the price. I am certain that once the Ubuntu issue is worked on and solutions arise, the Ubuntu performance will be better than ever.

    Read the article

  • Ubuntu 13.04 to 13.10: Filesystem check or mount failed

    - by SamHuckaby
    I attempted to upgrade from Ubuntu 13.04 to 13.10 today, and mid-upgrade the system started flaking out and eventually locked up entirely. I was forced to restart the computer, and am now unable to get the computer to boot up at all. When I boot currently, it takes me to the GRUB menu, and I can choose to boot normally, or boot an older version. I have tried several things, which I list below, but no matter what, when I try to finish booting into Ubuntu, I receive the following error:

        Filesystem check or mount failed.
        A maintenance shell will now be started.
        CONTROL-D will terminate this shell and continue booting after re-trying filesystems.
        Any further errors will be ignored
        root@ubuntu-computername:~#

    I have run fsck -f and everything appears correct: no errors are reported and it passes all 5 checks. If I run fdisk -l then I get the following information:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 4096 bytes / 4096 bytes
        Disk identifier: 0x00010824

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   608456703   304227328   83  Linux
        /dev/sda2       608458750   625141759     8341505    5  Extended
        Partition 2 does not start on physical sector boundary.
        /dev/sda5       608458752   625141759     8341504   82  Linux swap / Solaris

        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0fb4b7e8

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            8192   625139711   312565760    7  HPFS/NTFS/exFAT

    I am considering just installing a new OS on the other disk, which currently has nothing on it, and then attempting to scrape my data off the old disk (thankfully I didn't encrypt the files). Really my question is this: can I salvage this Ubuntu install, or should I give up and just reinstall?

    Read the article

  • Risk to native OS from Live CD?

    - by Frost Shadow
    Will booting from a Live CD (I was thinking Anonym OS) pose any risk to the native OS? I wanted to try it out on my school's computer, but I'd rather not have to explain why I accidentally reformatted the HD and deleted everything. I know once you've booted the right way, it shouldn't leave any trace on the HD, but is it possible I could push some wrong button and end up overwriting the native OS with the Live OS? Also, since the computer itself is connected to the internet, will the network administrator be able to see that I've booted from a Live CD? I'm thinking yes, but just thought I'd check. Thanks for any help!

    Read the article

  • How to maximise performance for computers connected to a LAN via a Gigabit Ethernet router?

    - by penyuan
    Our group is setting up a server (which might just be a NAS, but we're not sure yet) to share files with all the other computers in the room (about 10 of them). I am thinking of just hooking all of them up via a gigabit router/switch. Is there anything I should watch out for in terms of cables, connections, or the connection capabilities of each computer in the network? For instance, I don't want a slow computer in the LAN to slow down everyone else's connection, etc., etc. Thanks for the education.

    Read the article

  • MacBook Refuses to Boot

    - by pas09
    I have a MacBook that has unfortunately died on me. I randomly got a pop-up message that said I needed to restart my computer, and once I did, I was greeted with the blinking folder-with-question-mark startup screen. I've tried everything, including running Disk Utility repair and reformatting the hard drive and reinstalling OS X. All of my data is backed up and safe; I just need my computer to start again. Before I go off and buy a new hard drive, I wanted to see if there is anything I might have forgotten.

    Read the article

  • Acceptable placement of the composition root using dependency injection and inversion of control containers

    - by Lumirris
    I've read in several sources, including Mark Seemann's 'Ploeh' blog, that the appropriate placement of the composition root of an IoC container is as close as possible to the entry point of an application. In the .NET world, these applications seem to be commonly thought of as Web projects, WPF projects, console applications -- things with a typical UI (read: not library projects).

    Is it really going against this sage advice to place the composition root at the entry point of a library project, when it represents the logical entry point of a group of library projects, and the client of such a project group is someone else's work, whose author can't or won't add the composition root to their own project (a UI project, or yet another library project, even)?

    I'm familiar with Ninject as an IoC container implementation, but I imagine many others work the same way, in that they can scan for a module containing all the necessary binding configurations. This means I could put a binding module in its own library project to compile with my main library project's output, and if the client wanted to change the configuration (an unlikely scenario in my case), they could drop in a replacement dll for the library with the binding module. This seems to avoid the most common clients having to deal with dependency injection and composition roots at all, and would make for the cleanest API for the library project group.

    Yet this seems to fly in the face of conventional wisdom on the issue. Is it just that most of the advice out there assumes the developer has some coordination with the development of the UI project(s) as well, rather than my case, in which I'm just developing libraries for others to use?
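
    For what it's worth, the pattern being described -- a composition root at the library group's logical entry point, hidden behind a factory so clients never touch the container -- can be sketched in a few lines. A minimal illustration in Python (all class and function names here are hypothetical stand-ins for real bindings, not any actual Ninject API):

        # Hypothetical library-internal types, standing in for real bindings.
        class SqlRepository:
            def fetch(self, key):
                return f"row for {key}"

        class ReportService:
            def __init__(self, repository):
                self.repository = repository

            def build(self, key):
                return self.repository.fetch(key)

        def create_report_service():
            # Composition root: the one place where the library group
            # wires its own object graph. Clients call this factory
            # instead of composing the dependencies themselves.
            return ReportService(SqlRepository())

        service = create_report_service()
        print(service.build("q3"))  # -> "row for q3"

    The drop-in-a-replacement-dll idea is the same move: the factory (or binding module) is the only piece a client would ever swap out.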

    Read the article

  • How can I set my resolution to 1280x1024 on an Acer Aspire Revo 3700?

    - by torbengb
    I've just set up a new nettop computer (Acer Aspire Revo 3700: CPU: Atom D525, GPU: Nvidia ION2) with a clean install of Ubuntu 10.10 using the standard USB pendrive method. Almost everything works OK, but the graphics are not: the recommended Nvidia driver is activated, but the monitor is not detected, so the resolution is wrong. How can I make Ubuntu detect my monitor? How can I get the proper resolution (1280x1024) in Ubuntu?

    My monitor is not a CRT but an LCD: a BenQ, model T905, with 1280x1024 resolution at 60Hz, connected via a normal VGA cable. DVI or HDMI is not an option. When I go to System > Preferences > Monitors, I get:

        It appears that your graphics driver does not support the necessary extensions to use this tool. Do you want to use your graphics driver vendor's tool instead? YES / NO

    If I say NO I get one window, and for YES I get another; in both cases I don't see how to fix the problem. The main reason for getting this new computer was that I was sick of having graphics problems on the old one, with a very ugly workaround that didn't give me hardware support -- but at least I got the resolution. Why is this so difficult... sigh!

    Read the article

  • Generalise variable usage inside code

    - by Shirish11
    I would like to know if it is a good practice to generalise variables (use a single variable to store all the values). Consider a simple example:

        Strings querycre, queryins, queryup, querydel;
        querycre = 'Create table XYZ ...';
        execute querycre;
        queryins = 'Insert into XYZ ...';
        execute queryins;
        queryup = 'Update XYZ set ...';
        execute queryup;
        querydel = 'Delete from XYZ ...';
        execute querydel;

    and

        Strings query;
        query = 'Create table XYZ ...';
        execute query;
        query = 'Insert into XYZ ...';
        execute query;
        query = 'Update XYZ set ...';
        execute query;
        query = 'Delete from XYZ ...';
        execute query;

    In the first case I use 4 strings, each storing the data for the action mentioned in its suffix. In the second case, just 1 variable to store all kinds of data. Having different variables makes it easier for someone else to read and understand the code, but having too many of them makes it difficult to manage. Also, does having too many variables hamper performance?

    Read the article

  • Sharing internet connection from Windows XP using wi-fi router

    - by Darius
    Hi, I have a network configured like this:

    - Ethernet cable from the ISP connected to a Windows XP machine, configured with static IP 192.168.0.3
    - Another ethernet connection from the 2nd network adapter of the Windows XP machine to a Wi-Fi router (D-Link Airport G+)
    - XP set to "Share internet connection", with the 2nd adapter configured statically as 192.169.0.1
    - The D-Link Wi-Fi router also configured as a "static connection", its IP set to 192.169.0.2 and default gateway to 192.169.0.1; the network mask everywhere is 24
    - A laptop computer connected to the router with static IP 192.169.0.3

    The problems are:

    - The XP machine sees the router (it's able to ping it and access it via the web admin tool)
    - The router somehow cannot ping the XP machine (using the tool provided by the web-based admin tool)
    - The laptop computer cannot ping anything and cannot be pinged
    - The router is only accessible when the ethernet cable is connected to one of the router's 1-4 LAN ports; when I connect it via the "WAN" port (which I believe is the proper one) it's not visible from the XP machine

    If you have similar experience with configuring a network like this I would really appreciate your help. I cannot use the Wi-Fi router with the ISP cable itself.

    Read the article

  • Algorithm to infer tag hierarchy

    - by Tom
    I'm looking for an algorithm to infer a hierarchy from a set of tagged items. E.g. if the following items have the tags:

        1  a
        2  a,b
        3  a,c
        4  a,c,e
        5  a,b
        6  a,c
        7  d
        8  d,f

    Then I can construct an undirected graph (or graphs) by tallying the node weights and edge weights:

        node weights    edge weights
        a  6            a-b  2
        b  2            a-c  3
        c  3            c-e  1
        d  2            a-e  1   <-- this edge is parallel to a-c and c-e and not wanted
        e  1            d-f  1
        f  1

    The first problem is how to drop any redundant edges to get to the simplified graph? Note that it's only appropriate to remove that redundant a-e edge in this case because something is tagged as a-c-e; if that wasn't the case and the tag was a-e, that edge would have to remain. I suspect that means the removal of edges can only happen during the construction of the graph, not after everything has been tallied up.

    What I'd then like to do is identify the direction of the edges to create a directed graph (or graphs) and pick out root nodes to hopefully create a tree (or trees):

        a       d
       / \      |
      b   c     f
           \
            e

    It seems like it could be a string algorithm -- longest common subsequences/prefixes -- or a tree/graph algorithm, but I am a little stuck since I don't know the correct terminology to search for it.
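
    As a starting point, here is a rough sketch in Python of one possible approach (my own illustration, not a named algorithm): direct each co-occurrence edge from the more frequent tag to the less frequent one, then drop edges implied transitively by other paths:

        from itertools import combinations
        from collections import Counter

        items = {1: {"a"}, 2: {"a", "b"}, 3: {"a", "c"}, 4: {"a", "c", "e"},
                 5: {"a", "b"}, 6: {"a", "c"}, 7: {"d"}, 8: {"d", "f"}}

        freq = Counter(t for tags in items.values() for t in tags)

        # Direct each co-occurring pair from the more frequent tag
        # (assumed parent) to the less frequent one (assumed child).
        edges = set()
        for tags in items.values():
            for x, y in combinations(tags, 2):
                parent, child = sorted((x, y), key=lambda t: -freq[t])
                edges.add((parent, child))

        def reachable(src, dst, es):
            # Depth-first search over the remaining edges.
            stack, seen = [src], set()
            while stack:
                node = stack.pop()
                for p, c in es:
                    if p == node and c not in seen:
                        if c == dst:
                            return True
                        seen.add(c)
                        stack.append(c)
            return False

        # Transitive reduction: a-e is dropped because a-c-e covers it.
        reduced = {e for e in edges if not reachable(e[0], e[1], edges - {e})}
        print(sorted(reduced))  # [('a', 'b'), ('a', 'c'), ('c', 'e'), ('d', 'f')]

    As the question anticipates, this simple version would still drop a-e even if some item were tagged only a,e alongside the a,c,e item; handling that would mean checking, before removing an edge, that no item relies on it directly.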

    Read the article

  • How are SQL Server CALs counted?

    - by Sam
    Running a SQL Server, as far as I understand it, requires one CAL for every user who connects to the database server. But what happens if the only computer accessing the SQL Server is the server running your business layer? If, for example, you have 1 SQL Server and 1 business logic server, and 100 clients who all just query and use the business logic server -- no client uses the SQL Server directly, and no one is even allowed to contact it. So, since there is only one computer using the SQL Server, do I need only 1 CAL??? I somehow can't believe this would count as only 1 CAL needed for the SQL Server, but I would like to know why not.

    Read the article

  • Can't get NAT to work

    - by user31738
    Hello, I bought a D-Link DIR-300 wireless router and I can't get NAT to work; I have SSH and HTTP services I need to forward to the internet. My connection is as follows:

    - I have an ADSL connection, using an ADSL ethernet modem that is connected and working; it doesn't let me put it in bridge mode
    - My router is connected to the ADSL modem through ethernet and gets its IP through DHCP (it's always the same)
    - I have a desktop computer running Linux with Apache and OpenSSH configured and working; it has a fixed IP

    I configured NAT in the modem, forwarding port 22 from the router's IP to the internet. In the router I set up NAT forwarding port 22 from the desktop computer's fixed IP outwards. This setup already worked with a Fonera I had before. Can anyone help me with this or tell me what kind of tests I need to do? How can I test whether the router is forwarding ports correctly before the modem?
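
    For the last question, one simple test is to try opening a TCP connection to the forwarded port from a machine on the far side of the hop being tested. A minimal sketch in Python (the address is a placeholder, not a real host):

        import socket

        def port_open(host, port, timeout=3.0):
            # True if something on host accepted a TCP connection on port.
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # Replace with the router's WAN address to test the router's
        # forwarding alone, or the modem's public address for the full chain.
        print(port_open("203.0.113.1", 22))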

    Read the article

  • How to retain secondary hard drive mounts at reboot and keep shares?

    - by Tom
    I'm running Ubuntu 12.04. A second hard drive connected to this computer does not mount when the computer boots. Additionally, I have set up the drive to be shared, but the share is not retained; it is lost after each boot. My main system drive and a removable drive mount OK and their shares remain between boots. Additional information follows (D2Linux, on sda1, is the secondary hard drive; L-Freeagent, on sdc1, is the removable drive).

    Here is the contents of fstab immediately after booting (D2Linux /dev/sda1 not yet mounted):

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        proc            /proc           proc    nodev,noexec,nosuid 0       0
        # / was on /dev/sdb1 during installation
        UUID=43d29a82-66b3-40f3-91ed-735a27a60004 /    ext4 errors=remount-ro 0 1
        # swap was on /dev/sdb5 during installation
        UUID=cf8e3351-11d0-487a-8a6e-e499c2e88a10 none swap sw 0 0

    Here is the output of mount with all drives mounted (I did not restore the share):

        /dev/sdb1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        gvfs-fuse-daemon on /home/tom/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tom)
        /dev/sdc1 on /media/L-Freeagent type ext4 (rw,nosuid,nodev,uhelper=udisks)
        /dev/sda1 on /media/D2Linux type ext4 (rw,nosuid,nodev,uhelper=udisks)

    Thank you!

    Read the article

  • How to avoid throwing vexing exceptions?

    - by Mike
    Reading Eric Lippert's article on exceptions was definitely an eye opener on how I should approach exceptions, both as the producer and as the consumer. However, I'm still struggling to define a guideline regarding how to avoid throwing vexing exceptions. Specifically:

    - Suppose you have a Save method that can fail because a) somebody else modified the record before you, or b) the value you're trying to create already exists. These conditions are to be expected and not exceptional, so instead of throwing an exception you decide to create a Try version of your method, TrySave, which returns a boolean indicating if the save succeeded. But if it fails, how will the consumer know what the problem was? Or would it be best to return an enum indicating the result, something like Ok/RecordAlreadyModified/ValueAlreadyExists? With integer.TryParse this problem doesn't exist, since there's only one reason the method can fail.

    - Is the previous example really a vexing situation? Or would throwing an exception in this case be the preferred way? I know that's how it's done in most libraries and frameworks, including the Entity Framework.

    - How do you decide when to create a Try version of your method vs. providing some way to test beforehand whether the method will work or not? I'm currently following these guidelines:

      - If there is the chance of a race condition, create a Try version. This prevents the need for the consumer to catch an exogenous exception. For example, in the Save method described before.
      - If the method to test the condition would pretty much do all that the original method does, create a Try version. For example, integer.TryParse().
      - In any other case, create a method to test the condition.
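
    To make the enum option concrete, here is a minimal sketch in Python of a result-enum TrySave (the store, the version check, and all the names are my own illustration, not a known API):

        from enum import Enum, auto

        class SaveResult(Enum):
            OK = auto()
            RECORD_ALREADY_MODIFIED = auto()
            VALUE_ALREADY_EXISTS = auto()

        class Store:
            # Toy in-memory store with optimistic-concurrency versions.
            def __init__(self):
                self.rows = {}  # key -> (version, value)

            def try_save(self, key, value, expected_version):
                current = self.rows.get(key)
                if current is not None and current[0] != expected_version:
                    return SaveResult.RECORD_ALREADY_MODIFIED
                if any(v == value for _, v in self.rows.values()):
                    return SaveResult.VALUE_ALREADY_EXISTS
                version = current[0] + 1 if current else 1
                self.rows[key] = (version, value)
                return SaveResult.OK

        store = Store()
        print(store.try_save("k1", "hello", expected_version=0))  # SaveResult.OK
        print(store.try_save("k2", "hello", expected_version=0))  # VALUE_ALREADY_EXISTS

    The caller can then switch on the result without a try/catch, while truly unexpected failures (a broken connection, say) still surface as exceptions.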

    Read the article

  • Connecting to a VirtualBox machine from the host, using an IP address

    - by Doron
    Hello, on a MacBook host I run VirtualBox with an Ubuntu Server guest, using a NAT network setting. In the virtual machine application Parallels, the host would get an IP address for the guest, to which I could later assign hostnames and which I could access directly. However, I couldn't find a way to accomplish this using VirtualBox. The only solution VirtualBox offers me is to set up port forwarding and access "localhost" on those ports. How can I get the desired behaviour set up without having to change to a bridged network setting and expose my guest computer to the network my host computer is connected to? Thanks.

    Read the article

  • Verification vs validation again: does testing belong to verification? If so, which?

    - by user970696
    I have asked before and created a lot of controversy, so I tried to collect some data and ask a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp

    According to ISO 12207, testing is done in validation:

    - Prepare Test Requirements, Cases and Specifications
    - Conduct the Tests

    For verification, it mentions:

    - The code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery.
    - The software components and units of each software item have been completely and correctly integrated into the software item.

    Not sure how to verify that without testing, but testing is not listed there as a technique. From IEEE:

    - Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]
    - Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]

    At the end of the development process? That would mean UAT. So the question is: which testing (unit, integration, system, UAT) is considered verification, and which validation? I do not understand why some say dynamic verification is testing, while others say testing is only validation.

    An example: I am testing an application. The system requirements say there are two fields with a max length of 64 characters, and a Save button. The use case says: the user fills in the first and last name and saves. When checking the fields and the Save button's presence, I would say it's verification. When I follow the use case, it's validation. So it's both together, done on the system as a whole.

    Read the article

  • Effortlessly resize images in Orchard 1.7

    - by Bertrand Le Roy
    I’ve written several times about image resizing in .NET, but never in the context of Orchard. With the imminent release of Orchard 1.7, it’s time to correct this. The new version comes with an extensible media pipeline that enables you to define complex image processing workflows that can automatically resize, change formats or apply watermarks. This is not the subject of this post however. What I want to show here is one of the underlying APIs that enable that feature, and that comes in the form of a new shape. Once you have enabled the media processing feature, a new ResizeMediaUrl shape becomes available from your views. All you have to do is feed it a virtual path and size (and, if you need to override defaults, a few other optional parameters), and it will do all the work for you of creating a unique URL for the resized image, and write that image to disk the first time the shape is rendered: <img src="@Display.ResizeMediaUrl(Path: img, Width: 59)"/> Notice how I only specified a maximum width. The height could of course be specified, but in this case will be automatically determined so that the aspect ratio is preserved. The second time the shape is rendered, the shape will notice that the resized file already exists on disk, and it will serve that directly, so caching is handled automatically and the image can be served almost as fast as the original static one, because it is also a static image. Only the URL generation and checking for the file existence takes time. Here is what the generated thumbnails look like on disk: In the case of those product images, the product page will download 12kB worth of images instead of 1.87MB. The full size images will only be downloaded as needed, if the user clicks on one of the thumbnails to get the full-scale. This is an extremely useful tool to use in your themes to easily render images of the exact right size and thus limit your bandwidth consumption. Mobile users will thank you for that.

    Read the article

  • Multicast hostname lookups on OSX

    - by KARASZI István
    I have a problem with hostname lookups on my OSX computer. Apple's HK3473 document says, for v10.6:

        Host names that contain only one label in addition to local, for example "My-Computer.local", are resolved using Multicast DNS (Bonjour) by default. Host names that contain two or more labels in addition to local, for example "server.domain.local", are resolved using a DNS server by default.

    Which is not true in my testing. If I try to open a connection from my local computer to a remote port:

        telnet example.domain.local 22

    then it will look up the IP address with multicast DNS next to the A and AAAA lookups. This causes a two-second lookup timeout on every lookup, which is a lot! When I try with IPv4 only, it won't use the multicast queries to fetch the remote address, just the simple A queries:

        telnet -4 example.domain.local 22

    When I try with IPv6 only:

        telnet -6 example.domain.local 22

    then it will look up with multicast DNS and AAAA again, and the two-second timeout delay occurs again. I've tried to create resolver entries in /etc/resolver/domain.local and /etc/resolver/local.1, but neither of them worked. Is there any way to disable these multicast lookups for the "two or more labels in addition to local" domains, or simply disable them for the selected subdomain (domain.local)? Thank you!

    Update #1: Thanks @mralexgray for the scutil --dns command; now I can see my domain in the list, but it's late in the order:

        DNS configuration
        resolver #1
          domain : adverticum.lan
          nameserver[0] : 192.168.1.1
          order : 200000
        resolver #2
          domain : local
          options : mdns
          timeout : 2
          order : 300000
        resolver #3
          domain : 254.169.in-addr.arpa
          options : mdns
          timeout : 2
          order : 300200
        resolver #4
          domain : 8.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order : 300400
        resolver #5
          domain : 9.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order : 300600
        resolver #6
          domain : a.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order : 300800
        resolver #7
          domain : b.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order : 301000
        resolver #8
          domain : domain.local
          nameserver[0] : 192.168.1.1
          order : 200001

    Maybe it would work if I could move resolver #8 to position #2.

    Update #2: No, it probably won't work, because the local DNS server on 192.168.1.1 answers domain.local requests and it comes before the mDNS resolver (#2).

    Update #3: I could decrease the mDNS timeout in the /System/Library/SystemConfiguration/IPMonitor.bundle/Contents/Info.plist file, which speeds up the lookups a little, but this is not the solution.

    Read the article

  • Ubuntu 12.04 LTS won't install - never finishes please help

    - by Richard Higgins
    I want to try Ubuntu after using Windows for 30 years. I have tried to install it 5 times on a Lenovo X120e notebook and twice on a Lenovo M57 desktop. No luck; worse than what Microsoft puts you through.

    I burned 12.04 LTS to disc. It installs up to the "Who Are You?" screen, then stops. I accepted the recommended computer name and lower-case user name, and chose "log me in automatically". After that there is no progress bar, no rotating or pulsing button, nothing to indicate that Ubuntu has not died or fallen asleep. Is that how it is written? I have never heard of a program that would take a long time to install while the user looks at a locked, dead screen.

    I just bought the M57 desktop for my son. It came with Ubuntu 10-something. I wanted to upgrade to 12.04, but it crashed, twice, to a DOS-like screen saying the PC lacked a certain "init" file. Various help-screen commands did not help.

    On the X120e, I thought a partially failed Ubuntu install was causing the problem, so I removed the drive, deleted the Ubuntu partition and replaced it. But same result: after I fill in my name and accept the computer and user names, the "continue" button does not appear to work. I can go "back" but not forward. I have waited torturous hours. It doesn't take more than two hours to install, does it?

    It is my own fault because of the high expectations I had for a sensible, hassle-free installation, but I am immensely disappointed. Thank you for any response.

    Read the article

  • Server 2008 likes to restart itself

    - by Campo
    I have a weird issue here. I notice that about once a week the web server restarts itself. This would be only a minor issue if we were not planning on implementing an IP failover. I have checked the event logs and I don't see anything that indicates a reason for the restart. I need some help diagnosing why the server restarts. It happened last night at 5:00 AM; the last event in the log was 1 hour before the unexpected shutdown. Here is the log entry for the shutdown event. Any help is much appreciated; I know there isn't much to go on yet.

        Log Name:      System
        Source:        EventLog
        Date:          5/5/2010 5:01:12 AM
        Event ID:      6008
        Task Category: None
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      SERVERNAME
        Description:
        The previous system shutdown at 4:56:41 AM on 5/5/2010 was unexpected.
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="EventLog" />
            <EventID Qualifiers="32768">6008</EventID>
            <Level>2</Level>
            <Task>0</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2010-05-05T09:01:12.000Z" />
            <EventRecordID>346094</EventRecordID>
            <Channel>System</Channel>
            <Computer>SERVERNAME</Computer>
            <Security />
          </System>
          <EventData>
            <Data>4:56:41 AM</Data>
            <Data>5/5/2010</Data>
            <Data> </Data>
            <Data> </Data>
            <Data>39594</Data>
            <Data> </Data>
            <Data> </Data>
            <Binary>DA070500030005000400380029008E03DA070500030005000800380029008E033C0000003C000000000000000000000000000000000000000100000000000000</Binary>
          </EventData>
        </Event>

    Read the article

  • backup software that ignores user rights

    - by Chris
    Hi, as a computer technician I have to reinstall systems almost daily (when they can't be repaired ;-)). My problem is that I recover user files by hand with an external mounting device. Most of the time it works fine, but weekly I also get systems with passwords and personal files which are often not successfully recovered. I know you can change the owner, but when people have 30 GB of data, my backup computer takes ages to change the rights. Can anyone think of software (commercial is no problem) which does the following:

    - backs up user data without user-rights troubles
    - has an option to choose what to back up (email accounts, documents, etc., etc.) even when the drive is externally mounted (in short, it recognizes the folder structure)
    - works on different OSes like XP, Vista, W7

    Read the article

  • Using Telerik MVC with your own custom jQuery and/or other plug-ins

    - by Steve Clements
    If you are using MVC it might be worth checking out the Telerik controls (http://demos.telerik.com/aspnet-mvc); they are free if you are doing an internal or "not for profit" application. If you do choose to use them, however, you could come up against a little problem I had: using the Telerik controls with your own custom jQuery. In my case I was using the jQuery UI dialog, and it kept throwing an error where I was setting my div to a dialog:

        $("#textdialog").dialog({

    The problem is that when you use the Telerik MVC controls you need to call ScriptRegistrar

        @Html.Telerik().ScriptRegistrar()

    in order to set up the javascript for the controls. By default this adds a reference to jQuery, and if you have already added a reference to jQuery because you are using it elsewhere, this causes a problem. I found the solution here. It was to change the above ScriptRegistrar call to this:

        @Html.Telerik().ScriptRegistrar().jQuery(false).DefaultGroup(g => g.Combined(true).Compress(true));

    If you come across this one on stackoverflow, it won't work -- in my case the HtmlEditor would render no problem, but was unusable. Which is the same as someone else found when using the tab control: they went to the bother of re-writing the ScriptRegistrar. Not for me, that one!!

    Read the article

  • What do encrypted files' data look like?

    - by Frost Shadow
    I know there are a lot of encryption programs available which, I would guess, use different methods for encryption and thus have different types of output files (.fve, .tc, .cha, .dmg for BitLocker, TrueCrypt, Challenger, etc.), but if someone didn't know what a file was and just looked at the data, what would it look like? Does it just look like random bits, or can you still pick out a pattern? If it does look random, how is it that if I move the encrypted file to another computer, the other computer can tell it's a file and is able to decrypt it (how would it even know where to start or stop, if it all looked random)? Also, how is the structure affected by encrypting files twice, using the same method or a different one? Thanks for any help, and if you know any books or sites about encryption for complete idiots, I'd appreciate it!
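
    One concrete way to see the "does it look random" part is to measure byte entropy: well-encrypted data scores close to the 8 bits-per-byte maximum, while text and most structured file formats score noticeably lower. A small Python sketch (the file names are placeholders):

        import math
        from collections import Counter

        def byte_entropy(path):
            # Shannon entropy in bits per byte; ~8.0 reads as "random".
            data = open(path, "rb").read()
            counts = Counter(data)
            n = len(data)
            return -sum(c / n * math.log2(c / n) for c in counts.values())

        # English text tends to land around 4-5 bits/byte; encrypted or
        # compressed data is typically 7.9+.
        # print(byte_entropy("document.txt"))
        # print(byte_entropy("container.tc"))

    As for knowing where a file starts and stops, that bookkeeping lives in the filesystem rather than in the file's bytes; many container formats also keep a small recognizable header, though some deliberately avoid one.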

    Read the article
