Search Results

Search found 3137 results on 126 pages for 'digital signature'.

Page 96/126 | < Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >

  • No sound through headset - only mic is working

    - by Kristis
    I noticed that no sound is being played to my headphones. The laptop has a Conexant sound card together with the sound apps provided. The thing is, I also noticed that instead of one playback device, two are now presented: speakers and headphones. And while the speakers play sound nicely, even test sounds are not played through the headphone output. Also, the headphone output does not have a jack assigned to it, while the speakers have "L R Rear Panel Analog Jack" (my laptop does not have a jack on the back - only on the right). Also, my headphones have a mic as well, and when I plug it in the mic works (using the top panel digital jack), but the headphones themselves do not. And the laptop does recognize when an audio device is plugged in. I checked the headset on other devices - the headphones are working. I have tried updating drivers, rolling back drivers and completely uninstalling drivers and then restarting - nothing helped. I imagine that I somehow need to reconfigure the jack assignments, I just have no idea how or where. Any suggestions? Thanks

    Read the article

  • SQL Authority News – Presenting at SQL Bangalore on May 3, 2014 – Performing an Effective Presentation

    - by Pinal Dave
    SQL Bangalore is a wonderful community and we always have a great response when we present on technology. It is a SQL user group and we discuss everything SQL there. This month we have a SQL Server 2014 theme and we are going to have a community launch on this subject. We have the best of the best speakers presenting on SQL Server 2014 technology. Looking at the whole line-up of celebrity speakers, I have decided not to present on SQL Server. I will be presenting on the performance tuning subject, but with a twist of soft skills: I will be presenting on “Performing an Effective Presentation“. Trust me, you do not want to miss this presentation - I will be presenting on how to present effectively when presenting SQL Server topics.
    What this session will NOT have: I personally believe that we all are good presenters most of the time. We can all easily call out if someone is a bad presenter. There is no point talking about basics like bigger bullet points, talk loudly, talk with confidence, use better analogies, etc. In simple words - this is not going to be some philosophy session with boring notes.
    What this session will have: Well, this session will tell stories of my life. It will tell how we can present about technology and SQL Server with the help of stories and personal experience. I am going to tell stories about two legends who have inspired me. Right after that we will be doing two exercises together where we will learn, quickly and effectively, how to become a better speaker – instantly! There is no video recording of this session. If you want to get resources from this session, please sign up for my newsletter at http://bit.ly/sqllearn
    Here are a few of the slides from this presentation:
    Here are the details about the event and location. Venue: Microsoft Corporation, Signature Building, Embassy Golf Links Business Park, Intermediate Ring Road, Domlur, Bangalore – 560071
    The agenda is amazing - we have top-line SQL speakers. Everyone is welcome, and don't forget to bring a friend along for this event. Loads to learn and tons to share!!!
    Keynote (20 mins) by Anupam Tiwari – Business Program Manager – GTSC
    Backup Enhancements with SQL Server 2014 by Amit Banerjee – PFE, Microsoft
    Performance Enhancements with SQL Server 2014 by Sourabh Agarwal – PFE, Microsoft
    LUNCH BREAK
    Performing an Effective Presentation by Pinal Dave – Community Member (SQLAuthority.com)
    InMemory Enhancements with SQL Server 2014 by Balmukund Lakhani – Support Escalation Engineer, Microsoft
    Some more lesser known enhancements with SQL Server 2014 by Vinod Kumar – Technical Architect, Microsoft MTC
    Power Packed – Power BI with SQL Server by Kane Conway – Support Escalation Engineer, Microsoft
    I am a very big fan of Amit, Balmukund and Vinod – I have always watched their sessions, and this time I am once again going to attend their sessions without missing a single minute. They are SQL legends; I am going to be there and learn while they are sharing their knowledge. Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL

    Read the article

  • NGINX Configuration Error using Codex Example: Is This a Typo in Codex?

    - by jw60660
    I installed NGINX using this tutorial: C3M Digital NGINX Tutorial. But after reading this article on security issues with "cut and paste" configuration tutorials: Neal Poole's article regarding security and NGINX configuration, I decided to follow Poole's suggestion to use the configuration suggested in the WordPress codex: Codex on NGINX Configuration. I used the Codex configuration for a multisite installation using W3 Total Cache. When attempting to start NGINX I get an error saying that the /etc/nginx/nginx.conf test failed. The error message was: "Restarting nginx: nginx: [emerg] unknown directive "//" in /etc/nginx/sites-enabled/teambrazil.com:18". When I looked at my site-specific configuration at that path I noticed the rewrite rule in the server block was: rewrite ^ $scheme://teambrazil.conf$request_uri redirect; That line in the Codex example was: rewrite ^ $scheme://mysite.conf$request_uri redirect; That looked like a mistake to me, and I changed my line to: rewrite ^ $scheme://teambrazil.com$request_uri redirect; I then attempted to restart NGINX but got the same error message. My question is: is that a mistake in the Codex, and is there anything more I have to do aside from restarting NGINX after making this change? As suggested by both tutorials I set up the directories /etc/nginx/sites-enabled and /etc/nginx/sites-available and created the appropriate symbolic links using: touch /etc/nginx/sites-available/teambrazil.com and ln -s /etc/nginx/sites-available/teambrazil.com /etc/nginx/sites-enabled/teambrazil.com. Is there something else I need to consider after making this correction? Or was it not an error in the first place? I'm pretty stuck here. BTW, I am using Debian squeeze as the OS on Amerinoc's VPS. I'm just getting familiar with VPS administration and am pretty much a noob. Thanks very much, I would appreciate any input.

    Read the article

  • Doubts about several best practices for REST API + service layer

    - by TheBeefMightBeTough
    I'm going to be starting a project soon that exposes a RESTful API for business intelligence. It may not be limited to a RESTful API, so I plan to delegate requests to a service layer that then coordinates multiple domain objects (each of which has business logic local to the object). The API will likely have many calls, as it is a long-term project. While thinking about the design, I recalled a few best practices: 1) use command objects at the controller layer (I'm using Spring MVC); 2) use DTOs at the service layer; 3) validate in both the controller and service layer, though for different reasons. I have my doubts about these recommendations.
    1) Using command objects adds a lot of extra single-purpose classes (potentially one per request). What exactly is the benefit? Annotation-based validation can be done using this approach, sure. But what if I have two requests that take the same parameters but have different validation requirements? I would have to have two different classes with exactly the same members but different annotations? Bleh.
    2) I have heard that using DTOs is preferable to parameters because it makes for more maintainable code down the road (say, e.g., requirements change and the service parameters need to be altered). I don't quite understand this. Shouldn't an API be more or less set in stone? I would understand that in the early phases of a project (or, especially, an entire company) the domain itself will not be well understood, and thus core domain objects may change along with the APIs that manipulate those objects. At that point, however, the number of API methods should be small and their dependents few, so changes to the methods could easily be tolerated from a maintainability standpoint. In a large API with many methods and a substantial domain model, I would think having a DTO for potentially each domain object would become unwieldy. Am I misunderstanding something here?
    3) I see validation in the controller and service layer as redundant in most cases. Why would I validate that parameters are not null and are in general well formed in the controller if the service is going to do exactly the same (and more)? Couldn't I just do all the validation in the service and throw a runtime exception with a list of bad parameters, then catch that in the controller to make the error messages more presentable? Better yet, couldn't I just make the error messages user-friendly in the service and let the exception trickle up to a global handler (ControllerAdvice in Spring, for example)? Is there something wrong with either of these approaches? (I do see a use case for controller validation if the input does not map one-to-one with the service input, but since the controllers are for a REST API and not forms, the API parameters will probably map directly to service parameters.)
    I also have a question about unchecked vs. checked exceptions. Namely, I'm not really sure why I'd ever want to use a checked exception. Every time I have seen them used, they just get wrapped into general exceptions (DomainException, SystemException, ApplicationException, w/e) to reduce the signature length of methods, or devs catch Exception rather than dealing with App1Exception, App2Exception, Sys1Exception, Sys2Exception. I don't see how either of these practices is very useful. Why not just use unchecked exceptions always and catch the ones you actually do care about? You could just document what unchecked exceptions the method throws.
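    For illustration, here is a minimal sketch of the global-handler approach I'm describing, assuming Spring MVC 3.2+ (the ValidationException class, its fields and the error payload shape are hypothetical names I'm using here, not an established convention):
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        import org.springframework.http.HttpStatus;
        import org.springframework.http.ResponseEntity;
        import org.springframework.web.bind.annotation.ControllerAdvice;
        import org.springframework.web.bind.annotation.ExceptionHandler;
        import org.springframework.web.bind.annotation.ResponseBody;

        // Hypothetical unchecked exception thrown by the service layer when input is invalid.
        class ValidationException extends RuntimeException {
            private final List<String> badParameters;

            ValidationException(List<String> badParameters) {
                super("Invalid parameters: " + badParameters);
                this.badParameters = badParameters;
            }

            List<String> getBadParameters() {
                return badParameters;
            }
        }

        // Global handler: controllers stay free of try/catch; the exception trickles up
        // here and is turned into a presentable error body with a 400 status.
        @ControllerAdvice
        public class ApiExceptionHandler {

            @ExceptionHandler(ValidationException.class)
            @ResponseBody
            public ResponseEntity<Map<String, Object>> handleValidation(ValidationException ex) {
                Map<String, Object> body = new HashMap<>();
                body.put("error", "validation_failed");
                body.put("badParameters", ex.getBadParameters());
                return new ResponseEntity<>(body, HttpStatus.BAD_REQUEST);
            }
        }
    With something like this in place, a service method can simply throw ValidationException and every controller that delegates to it gets the same error handling for free.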

    Read the article

  • Can I connect a Playstation 3's HDMI output to my monitor's DVI-D input? [migrated]

    - by HankJDoomstorm
    I'm attempting to connect my PlayStation 3 to my computer monitor. The monitor has a DVI-D (dual link) input, so before I learned to distinguish between the different DVI varieties, I bought a DVI-I (dual link) to HDMI converter that won't fit into the port on the monitor (not only that, there isn't enough physical space at the back of the monitor to fit that much stuff before it hits the bottom). So I grabbed a DVI-D (single link) cable and a female-to-female DVI-I coupler, and plugged the DVI-D cable into the monitor and the whole mess of converters. The end result is HDMI to DVI-D single link, but my monitor isn't receiving a signal on its digital channel. (For clarity's sake: DVI-D DL input on the monitor, DVI-D SL cable, DVI-I DL female-to-female coupler, DVI-I DL to HDMI converter, HDMI output on the PS3.) I don't know much about this stuff (obviously), but my educated guess is that the bandwidth of the PS3's output is too high for the DVI-D single link cable, so nothing is getting through. Will replacing the single link cable with a dual link one resolve this? If not, is it possible at all? Oh, I should mention I'm aware I won't get audio through the monitor - I have an RCA to 3.5mm converter for that.

    Read the article

  • Windows Server 2012 Hyper-V very slow

    - by Matt Taylor
    I have been running several Hyper-V VMs on Windows Server 2008 R2 for the past couple of years and enjoying perfectly adequate performance for my testing/development/R&D environments. I'm a software developer, so my hardware knowledge is basic; however, I built the rig using:
    • Gigabyte GA-X58A-UD3R Intel X58 (Socket 1366) DDR3 Motherboard
    • Intel Core i7 960 3.20GHz (Bloomfield) (Socket LGA1366)
    • 24GB triple channel RAM
    The host OS is running on an OCZ SSD and all the VMs are running on a 2TB Marvell SATA3 RAID 0 array consisting of 2 Western Digital Caviar Black 7,200rpm drives. I have tested the speed of the 2TB drive and appear to be getting less than 3Mbs, but it can adequately run a 4 VM farm including a DC, (SQL) database and IIS application servers. I recently upgraded the SSD on which the host runs to a 256GB OCZ Vertex 4 and took the opportunity to upgrade to Windows Server 2012 and install the Hyper-V role. I tried importing one of my existing Windows Server 2008 R2 VMs (converted to .vhdx), plus I have tried creating a brand new Windows Server 2008 R2 VM, but both are running extremely slowly and I can see nothing obvious using the host and guest Task Manager/Resource Monitor tools. In both cases the VM has 8GB RAM (fixed), 4 CPUs, a fixed-size HD (not expanding) and is using an external virtual network running on a separate NIC to the host. I have upgraded the BIOS to the latest available version and checked the virtualization settings. I have run out of "obvious" (to a developer) things to check/configure, and my next option will be to re-install the host OS, but before I do I would very much appreciate any advice from any experts out there. Thanks

    Read the article

  • Windows 7 hangs after going into sleep a second time

    - by Brian Stephenson
    I've searched everywhere around Google and can't figure out why this is happening, so I decided to ask here to see if anyone has a problem like this. Like it says in the title, whenever I sleep ONCE I'm able to wake the system, but going back to sleep again AFTER waking up for the first time results in it hanging with no input and no output, with the fan spinning as fast as possible and a lot of heat being spewed out by the fan as well. I've tried various things like setting all USB Root Hubs to not get switched off for power saving, disabling USB selective suspend, disabling PCI-e link state power management, and even unplugging ALL USB devices, and it won't wake up after the second attempt. I've even waited up to a full hour with the CPU fan spinning loudly, and it's still stuck trying to wake up. The only USB devices I use are a Microsoft USB Comfort Curve Keyboard 2000 (IntelliType Pro) and a generic HID-compliant mouse from Creative, model number OMC90S "CREATIVE MOUSE OPTICAL LITE". My other devices like external drives and controllers are unplugged when I'm not using them, as having too many USB devices plugged in at a time causes a deadlock on almost all of the ports I have. Here are my system specifications (most of these are from CPU-Z):
    Brand: Gateway DX4300-19
    Mainboard: Gateway RS780
    Chipset: AMD 780G Rev 00
    Southbridge: AMD SB700 Rev 00
    LPCIO: ITE IT8718
    BIOS: American Megatrends Inc. ver P01-A4 09/15/2009
    CPU: AMD Phenom II X4 810 at 2.60 GHz
    RAM: 8.0 GB DDR2 Dual Channel Ganged Mode at 400 MHz
    GPU: ATI Radeon HD3200 Graphics Integrated - RS780
    OS: Windows 7 Home Premium x64 OEM (Acer Group)
    HDD: WDC WD10EADS-22M2B0 1.0 TB (Western Digital Green Caviar)
    My BIOS has absolutely no control over whether sleep mode is set to S1 or S3, so I can't check those settings or even change them. Hybrid sleep is also disabled. I can successfully go into hibernation and wake from hibernation, but this is painfully slow due to a hard drive problem I'm having with this "Green Drive". (Hibernation takes over 3 minutes to complete.) Any help would be appreciated, thanks.

    Read the article

  • Should I embed the sRGB color profile in JPEG files?

    - by basic6
    I have a large (growing) collection of scanned images. They are TIFF files, mostly 48-bit with the Adobe RGB color space. This color profile is embedded in the files; when such a file is opened in IrfanView (with plugins), it says (Image - Information) Adobe RGB 1998. "Normal images", like the JPG files from a digital camera, do not (necessarily) have a color profile embedded in the file. I understand that it's necessary to include the Adobe RGB profile in an image file which uses the Adobe RGB space, so the color values can be interpreted correctly. Here's a test image with a completely different color profile; programs that ignore the included profile (like MSIE8 or Gwenview) render it as sRGB (?). I'm planning to convert my TIF files to JPG, so I'm wondering if there's anything wrong with using IrfanView to save them as sRGB without embedding the sRGB profile. I've heard that images should always be saved with the color profile included. Since every image seems to be interpreted as sRGB by default (by software without color management), I don't understand why the sRGB profile should be included.

    Read the article

  • Ubuntu 9.10 installer doesn't recognize the hard drive

    - by dan
    I downloaded Ubuntu 9.10 x86_64 and am trying to install it on a fairly modern system with a Gigabyte GA-MA770-UD3 motherboard. Ubuntu 9.04 installed fine and still will when I stick that disc in, but 9.10 doesn't see my hard drive (Western Digital 250GB). If I boot from the disc, I can install GParted and it does recognize the drive, but when I try to start the install process from the live disc, Ubuntu again doesn't recognize the hard drive. I checked /var/log/messages and see this:
    Nov 12 17:28:08 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
    Nov 12 17:28:08 ubuntu activate-dmraid: Enabling dmraid support
    Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
    Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
    Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
    Nov 12 17:28:08 ubuntu activate-dmraid: no raid sets and with names: "nvidia_ciiajheb-0"
    Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
    I checked my BIOS: SATA is enabled and is set to IDE mode, so there shouldn't be software RAID. Nonetheless, I added nodmraid to the boot line and tried again. It still doesn't recognize the drive. I checked /var/log/messages again and now see this:
    Nov 12 17:49:38 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
    Nov 12 17:49:38 ubuntu activate-dmraid: Enabling dmraid support
    Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option
    Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option
    Any ideas on things to try? I've tried all of the various BIOS settings for SATA (IDE, RAID, etc.). Nothing seems to work.

    Read the article

  • Java: very slow Tomcat and too big WAR file

    - by NaN
    I created some sort of RESTful API backend for a mobile app. It's written completely in Java, using Jersey as the framework. At the moment no database is used; it's all in memory, but this is no problem so far (it's only for prototyping purposes). I ordered the smallest package from Digital Ocean and installed Tomcat 7. All in all Tomcat works, but I have three major problems: 1) It takes a long time until Tomcat deploys the app: I deploy it via the Tomcat manager and it takes about 2 minutes until the site works (excl. WAR upload time). 2) The WAR files are quite big (16MB): I don't know why they are so big. There are no database dependencies and most logic is written in plain Java. Okay, we are using Jersey, but 16MB is a lot for the logic of a small web service. 3) I have to restart Tomcat every 3 days or so. It looks like a memory leak or something similar; if the app runs for a few days the response time is quite high and the server seems to be frozen. It works again if I restart Tomcat via SSH. You can find my mvn pom file right here. Do you have some tips? Are there good Tomcat alternatives?
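    (For reference, a minimal sketch of one commonly cited alternative for small Jersey prototypes - running Jersey on an embedded Grizzly server instead of deploying a WAR to Tomcat. This assumes Jersey 2.x with the jersey-container-grizzly2-http artifact on the classpath; the package name com.example.api and the port are placeholders, not taken from the poster's project.)
        import java.net.URI;

        import org.glassfish.grizzly.http.server.HttpServer;
        import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;
        import org.glassfish.jersey.server.ResourceConfig;

        public class Main {
            public static void main(String[] args) throws Exception {
                // Scan a (hypothetical) package for JAX-RS resource classes.
                ResourceConfig config = new ResourceConfig().packages("com.example.api");

                // Start an embedded Grizzly HTTP server instead of deploying to Tomcat.
                HttpServer server = GrizzlyHttpServerFactory.createHttpServer(
                        URI.create("http://0.0.0.0:8080/"), config);

                System.out.println("Server running; press Enter to stop.");
                System.in.read();
                server.shutdownNow();
            }
        }
    Packaged as a plain jar, this avoids the deploy step entirely; it does not by itself explain the WAR size or the apparent leak, which are worth profiling separately.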

    Read the article

  • I am trying to figure out the best way to understand how to cache domain objects

    - by Brett Ryan
    I've always done this wrong, and I'm sure a lot of others have too: hold a reference via a map and write through to the DB, etc. I need to do this right, and I just don't know how to go about it. I know how I want my objects to be cached, but I'm not sure how to achieve it. What complicates things is that I need to do this for a legacy system where the DB can change without notice to my application. So in the context of a web application, let's say I have a WidgetService which has several methods:
        Widget getWidget();
        Collection<Widget> getAllWidgets();
        Collection<Widget> getWidgetsByCategory(String categoryCode);
        Collection<Widget> getWidgetsByContainer(Integer parentContainer);
        Collection<Widget> getWidgetsByStatus(String status);
    Given this, I could decide to cache by method signature, i.e. getWidgetsByCategory("AA") would have a single cache entry, or I could cache widgets individually, which would be difficult I believe; OR, a call to any method would first cache ALL widgets with a call to getAllWidgets(), but getAllWidgets() would produce caches that match all the keys for the other method invocations. For example, take the following untested theoretical code:
        Collection<Widget> getAllWidgets() {
            Entity entity = cache.get("ALL_WIDGETS");
            Collection<Widget> res;
            if (entity == null) {
                res = loadCache();
            } else {
                res = (Collection<Widget>) entity.getValue();
            }
            return res;
        }

        Collection<Widget> loadCache() {
            // Get widgets from the underlying DB
            Collection<Widget> res = db.getAllWidgets();
            cache.put("ALL_WIDGETS", res);
            Map<String, List<Widget>> byCat = new HashMap<>();
            for (Widget w : res) {
                // cache by the different types of method calls, i.e. by category
                if (!byCat.containsKey(w.getCategory())) {
                    byCat.put(w.getCategory(), new ArrayList<Widget>());
                }
                byCat.get(w.getCategory()).add(w);
            }
            cacheCategories(byCat);
            return res;
        }

        Collection<Widget> getWidgetsByCategory(String categoryCode) {
            CategoryCacheKey key = new CategoryCacheKey(categoryCode);
            Entity ent = cache.get(key);
            if (ent == null) {
                loadCache();
                ent = cache.get(key);
            }
            return ent == null ? Collections.<Widget>emptyList() : (Collection<Widget>) ent.getValue();
        }
    NOTE: I have not worked with a cache manager; the above code treats the cache as some object that holds entries by key/value pairs, and it is not modelled on any specific implementation. Using this I have the benefit of being able to cache all objects in the different ways they will be called, with only single objects on the heap, whereas if I were to cache the method call invocation via, say, Spring, it would (I believe) cache multiple copies of the objects. I really wish to understand the best ways to cache domain objects before I go down the wrong path and make it harder for myself later. I have read the documentation on the Ehcache website and found various articles of interest, but nothing that gives a good solid technique. Since I'm working with an ERP system, some DB calls are very complicated - not that the DB is slow, but the business representation of the domain objects makes it very clumsy. Coupled with the fact that there are actually 11 different DBs where information can be contained, which this application is consolidating into a single view, this makes caching quite important.
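    For comparison, here is a rough sketch of what I imagine the same read-through idea might look like on top of Ehcache 2.x (the cache name, the WidgetDao interface and the key format are made up for illustration, and only the category lookup is shown):
        import java.util.Collection;

        import net.sf.ehcache.Cache;
        import net.sf.ehcache.CacheManager;
        import net.sf.ehcache.Element;

        public class CachingWidgetService {

            private final CacheManager cacheManager = CacheManager.getInstance();
            private final WidgetDao dao; // hypothetical DAO that talks to the legacy DBs

            public CachingWidgetService(WidgetDao dao) {
                this.dao = dao;
                if (cacheManager.getCache("widgetsByCategory") == null) {
                    cacheManager.addCache("widgetsByCategory");
                }
            }

            @SuppressWarnings("unchecked")
            public Collection<Widget> getWidgetsByCategory(String categoryCode) {
                Cache cache = cacheManager.getCache("widgetsByCategory");
                Element element = cache.get(categoryCode);
                if (element == null) {
                    // Cache miss: load from the DB and remember the result under this key.
                    Collection<Widget> loaded = dao.findWidgetsByCategory(categoryCode);
                    element = new Element(categoryCode, loaded);
                    cache.put(element);
                }
                return (Collection<Widget>) element.getObjectValue();
            }
        }
    Since the DB can change without notice, the important design decision here seems to be the time-to-live / eviction policy configured for each cache region, rather than the lookup code itself.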

    Read the article

  • Index a low-cost NAS on Windows 7

    - by JcMaco
    Has anyone found a way to index the files stored on Network Attached Storage under Windows 7, so that the files are available in Windows Search and Libraries? I am referring to cheap, readily available NAS devices like the Western Digital My Book series that run an embedded Linux server. Similar question: http://windows7forums.com/windows-7-networking/6700-indexing-nas-drive-libraries.html
    EDIT: Windows Help proposes making the files stored on the NAS available offline. This is obviously not a good solution if the NAS holds more data than the client can store. Quoting the help text: If the folder is on a network device that is not part of your homegroup, it can be included as long as the content of the folder is indexed. If the folder is already indexed on the device where it is stored, you should be able to include it directly in the library. If the network folder is not indexed, an easy way to index it is to make the folder available offline. This will create offline versions of the files in the folder, and add these files to the index on your computer. Once you make a folder available offline, you can include it in a library. When you make a network folder available offline, copies of all the files in that folder will be stored on your computer's hard disk. Take this into consideration if the network folder contains a large number of files.

    Read the article

  • Growing a Linux software RAID5 array

    - by chrismetcalf
    On my home file server, I've got a 1.5TB software RAID5 array, built from four 500GB Western Digital drives. I've got a fifth drive that I usually run as a hot spare (but it is out of the array at the moment); if I can, I'd like to add it to the array and grow it to 2TB, since I'm running out of space. I Googled for guidance, but there seem to be a lot of differing opinions out there (many of them probably now out of date) as to whether or not that is possible and/or smart. What's the right way to go about this, or should I start looking into building a new array with more space? Version details:
    %> cat /etc/issue
    Debian GNU/Linux 5.0 \n \l
    %> uname -a
    Linux magrathea 2.6.26-1-686-bigmem #1 SMP Sat Jan 10 19:13:22 UTC 2009 i686 GNU/Linux
    %> /sbin/mdadm --version
    mdadm - v2.6.7.2 - 14th November 2008
    %> cat /proc/mdstat
    Personalities : [raid1] [raid6] [raid5] [raid4]
    md1 : active raid1 hdc1[0] hdd1[1]
          293033536 blocks [2/2] [UU]
    md0 : active raid5 sde1[3] sda1[0] sdc1[2] sdb1[1]
          1465151808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    Read the article

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting a current pending sector count of 45. I have used badblocks to identify the sectors and I have been trying to write zeros to them with dd. From what I understand, when I attempt to write data directly to the bad sectors, it should trigger a reallocation, reducing current pending sectors by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector.
    # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
    dd: error writing ‘/dev/sdb’: Input/output error
    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine, I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:
    Model Family:     Western Digital Caviar Green (AF)
    Device Model:     WDC WD15EARS-00Z5B1
    Serial Number:    WD-WMAVU3027748
    LU WWN Device Id: 5 0014ee 25998d213
    Firmware Version: 80.00A80
    User Capacity:    1,500,301,910,016 bytes [1.50 TB]
    Sector Size:      512 bytes logical/physical
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   ATA8-ACS (minor revision not indicated)
    SATA Version is:  SATA 2.6, 3.0 Gb/s
    Local Time is:    Fri Oct 18 17:47:29 2013 CDT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions:
    Why aren't the reallocations being recorded by the disk? I'm assuming the reallocation took place, as I can now write data directly to those sectors and couldn't before.
    Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?

    Read the article

  • Power supply switch light stays off, motherboard light turns on

    - by Sion
    I bought a computer at the thrift store yesterday. The computer powered on without any error beeps. After getting it back to the house, I determined that the CD drive and hard drive needed to be changed. I put in a populated hard drive to check, and the computer turned on and seemed to function. I then put in a new CD drive, and just now put in a new hard drive. When I plugged it in to check, I noticed that the light for the power supply switch did not come on, but the light on the motherboard is lit, and I could not turn the computer on. To help troubleshoot, I unplugged the CD and hard drives, then re-plugged the power supply and switched it on and off. Nothing changed. Parts:
    Motherboard: Digital Home PSW DH deluxe
    Power Supply: FSP Group FX700-GLN
    Did I accidentally unplug something while installing the hard drive? Is the power supply fried somehow?

    Read the article

  • Error compiling PHP 5.5.9 on CentOS 6.5 during make command

    - by Chris Mancini
    Here is the error message:
    cc: internal compiler error: Killed (program cc1)
    Please submit a full bug report, with preprocessed source if appropriate.
    See <file:///usr/share/doc/gcc-4.6/README.Bugs> for instructions.
    make: *** [ext/fileinfo/libmagic/apprentice.lo] Error 1
    The very last thing make was processing is apprentice.lo, which (going by the path) is part of the fileinfo extension's bundled libmagic. I am using Ansible to provision my instance. It is a Digital Ocean single core 512MB VM. I have been using Vagrant/Ansible with the same config locally for dev and it has compiled fine; this is the first cloud VM I am attempting to provision. The only difference is that the base image for my DO server comes from DO, while for my local dev I built my own Vagrant box via VirtualBox from a stock CentOS basic server install. I pull it down from my Dropbox. The problem has been experienced by others and reported as a PHP bug. My PHP Ansible role, up to the error:
    ---
    - name: Download php source
      get_url: url={{ php_source_url }} dest=/tmp
      register: get_url_result

    - name: untar the source package
      command: tar -xvf php-{{ php_version }}.tar.gz chdir=/tmp
      when: get_url_result.changed or php_reinstall

    - name: configure php 5.5
      command: >
        ./configure --prefix={{ php_prefix }} --with-config-file-path={{ php_config_file_path }}
        --enable-fpm --enable-ftp --enable-mbstring --enable-pdo --enable-soap --enable-sockets=shared
        --enable-zip --with-curl --with-fpm-group={{ nginx_group }} --with-fpm-user={{ nginx_user }}
        --with-freetype-dir=/usr/lib64/ --with-gd --with-jpeg-dir=/usr/lib64/ --with-libdir=lib64
        --with-mcrypt --with-openssl --with-pdo-mysql --with-pear --with-readline --with-tidy
        --with-xsl --with-zlib --without-pdo-sqlite --without-sqlite3
        chdir=/tmp/php-{{ php_version }}
      when: get_url_result.changed or php_reinstall

    - name: make clean when reinstalling
      command: make clean chdir=/tmp/php-{{ php_version }}
      when: php_reinstall

    - name: make php
      command: make chdir=/tmp/php-{{ php_version }}
      when: get_url_result.changed or php_reinstall
    Thanks in advance for any help. :)

    Read the article

  • USB hard disk not working on dual boot Windows 7/8

    - by Jesper
    Yesterday I installed Windows 8 on a machine that already had Windows 7. They dual boot and both systems work fine. The problem is that inserting a USB hard disk in either system does nothing. If I connect a USB mouse or a mobile phone, they work fine, so the USB ports are active and working, and the USB hard drives that I am trying to connect work on my other laptop just fine. I have tried uninstalling all USB-related items in Device Manager and letting them reinstall upon restart, but that didn't help. The USB drive does not show up in Disk Management either. The strange thing is that it is exactly the same situation in both Windows installations: USB mice etc. work just fine and USB hard drives do not. Any ideas on solving this problem would be great. ...I don't know if it is important, but this is a Toshiba Tecra R950 laptop. EDIT: I have found out that my other USB HD (Western Digital) works on this laptop, but my StoreJet Transcend and Adata "something" do not. All three work on another Windows 7 laptop. Size-wise the WD is in the middle at 400 GB; the StoreJet is 640 GB and the Adata is 200 GB.

    Read the article

  • HP laptop recognizes hard drive just long enough to install Windows

    - by Joe
    I have an HP laptop, a DV6500 (CTO). It refused to boot one day, so I ran some diagnostics (a friend lent me "Hiren's Boot Disk", "UBCD" and "PC DR 6"). Everything passed except for the HDD. I replaced the HDD with a used drive of unknown condition and installed Windows with no problems. I installed the wireless driver, tried to reboot... no luck. So I went to Best Buy and bought a brand new Western Digital 320GB HDD. I put it in the machine and installed Windows (Vista Home Premium). I installed the wired networking driver, tried to reboot... no luck. I put the first HDD back in the machine and reinstalled Windows. I started to install some drivers, went to reboot, and the machine won't come back to life. Put the second HDD in the machine - rinse, wash, and repeat. I've replaced the memory, even though it passed diagnostics; the problem exists with both the brand new memory and the old memory. The BIOS recognizes the hard drive. The computer freezes directly after the BIOS splash screen, and there is no hard drive activity light. I've tried two Linux live distros (Gentoo and Ubuntu); neither would run on this laptop, but both will on a different HP laptop. UBCD and Hiren's Boot Disk both ran, as did PC Doctor 6, which refuses to test anything (it gets stuck at "enumerating hard disks"). Is there anything else I can try?

    Read the article

  • Parameterized StreamInsight Queries

    - by Roman Schindlauer
    The changes in our APIs enable a set of scenarios that were either not possible before or could only be achieved through workarounds. One such use case that people ask about frequently is the ability to parameterize a query and instantiate it with different values instead of re-deploying the entire statement. I’ll demonstrate how to do this in StreamInsight 2.1 and combine it with a method of using subjects for dynamic query composition in a mini-series of (at least) two blog articles. Let’s start with something really simple: I want to deploy a windowed aggregate to a StreamInsight server, and later use it with different window sizes. The LINQ statement for such an aggregate is very straightforward and familiar:
        var result = from win in stream.TumblingWindow(TimeSpan.FromSeconds(5))
                     select win.Avg(e => e.Value);
    Obviously, we had to use an existing input stream object as well as a concrete TimeSpan value. If we want to be able to re-use this construct, we can define it as an IQStreamable:
        var avg = myApp
            .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>
                from win in s.TumblingWindow(w)
                select win.Avg(e => e.Value));
    The DefineStreamable API lets us define a function, in our case from an IQStreamable (the input stream) and a TimeSpan (the window length) to an IQStreamable (the result). We can then use it like a function, with the input stream and the window length as parameters:
        var result = avg(stream, TimeSpan.FromSeconds(5));
    Nice, but you might ask: what does this save me, aside from writing my own extension method? Well, in addition to defining the IQStreamable function, you can actually deploy it to the server, to make it re-usable by another process! When we deploy an artifact in V2.1, we give it a name:
        var avg = myApp
            .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>
                from win in s.TumblingWindow(w)
                select win.Avg(e => e.Value))
            .Deploy("AverageQuery");
    When connected to the same server, we can now use that name to retrieve the IQStreamable and use it with our own parameters:
        var averageQuery = myApp
            .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery");
        var result = averageQuery(stream, TimeSpan.FromSeconds(5));
    Convenient, isn’t it? Keep in mind that, even though the function “AverageQuery” is deployed to the server, its logic will still be instantiated into each process when the process is created. The advantage here is being able to deploy that function, so another client who wants to use it doesn’t need to ask the author for the code or assembly, but just needs to know the name of the deployed entity. A few words on the function signature of GetStreamable: the last type parameter (here: double) is the payload type of the result, not the actual result stream’s type itself. The returned object is a function from IQStreamable<SourcePayload> and TimeSpan to IQStreamable<double>. In the next article we will integrate this usage of IQStreamables with Subjects in StreamInsight, so stay tuned!
    Regards,
    The StreamInsight Team

    Read the article

  • VMWare Workstation 8 Disk I/O & Hard Faults

    - by Scott
    I have VMware Workstation 8 installed on a host machine with the following specs:
    • Intel i5 2500K CPU
    • 16GB DDR3 1600 RAM
    • 1TB Western Digital Caviar Black HD
    I have two Windows 7 virtual machines configured (currently running one at a time, but I will be operating both at once when my 32GB RAM kit arrives in a couple of days). Each one is configured with 8GB of RAM and no tweaks or performance customizations of any kind; all of the VMware settings are the defaults. When I boot into these machines and run various programs (Visual Studio, Outlook, etc.), I can hear the disk thrashing quite a bit, and checking Resource Monitor I can see that I'm getting anywhere between 300-800 hard faults per second. From the host machine, it shows they're coming from the VMware image; if I go to the virtual machine, whatever app I'm currently loading is the image that's causing the hard faults. As I understand it, hard faults are (simply) when an address in memory has been swapped out to the page file and has to be read from the page file instead of from memory. I don't understand why this is happening, though. With 8GB of RAM on the guest machine and 6.5GB available, what could be causing this? I know Windows 7 supposedly improved on page file management over XP, but this seems excessive: that kind of slowdown, disk thrashing and high hard fault count when I have that much free RAM. Is there anything I can do to improve the performance in my guest machines? On the host machine, I can open/run any applications at all, and hard faults stay around 0 with low disk I/O.

    Read the article

  • C# – Using a delegate to raise an event from one class to another

    - by Bill Osuch
    Even though this may be a relatively common task for many people, I’ve had to show it to enough new developers that I figured I’d immortalize it… MSDN says “Events enable a class or object to notify other classes or objects when something of interest occurs. The class that sends (or raises) the event is called the publisher and the classes that receive (or handle) the event are called subscribers.” Any time you add a button to a Windows Form or Web app, you can subscribe to the OnClick event, and you can also create your own event handlers to pass events between classes. Here I’ll show you how to raise an event from a separate class to a console application (or Windows Form). First, create a console app project (you could create a Windows Form, but this is easier for this demo). Add a class file called MyEvent.cs (it doesn’t really need to be a separate file, this is just for clarity) with the following code:
        public delegate void MyHandler1(object sender, MyEvent e);

        public class MyEvent : EventArgs
        {
            public string message;
        }
    Your event can have whatever public properties you like; here we’ve just got a single string. Next, add a class file called WorkerDLL.cs; this will simulate the class that would be doing all the work in the project. Add the following code:
        class WorkerDLL
        {
            public event MyHandler1 Event1;

            public WorkerDLL()
            {
            }

            public void DoWork()
            {
                FireEvent("From Worker: Step 1");
                FireEvent("From Worker: Step 5");
                FireEvent("From Worker: Step 10");
            }

            private void FireEvent(string message)
            {
                MyEvent e1 = new MyEvent();
                e1.message = message;
                if (Event1 != null)
                {
                    Event1(this, e1);
                }
                e1 = null;
            }
        }
    Notice that the FireEvent method creates an instance of the MyEvent class and passes it to the Event1 handler (which we’ll create in just a second). Finally, add the following code to Program.cs:
        static void Main(string[] args)
        {
            Program p = new Program(args);
        }

        public Program(string[] args)
        {
            Console.WriteLine("From Console: Creating DLL");
            WorkerDLL wd = new WorkerDLL();
            Console.WriteLine("From Console: Wiring up event handler");
            WireEventHandlers(wd);
            Console.WriteLine("From Console: Doing the work");
            wd.DoWork();
            Console.WriteLine("From Console: Done - press any key to finish.");
            Console.ReadLine();
        }

        private void WireEventHandlers(WorkerDLL wd)
        {
            MyHandler1 handler = new MyHandler1(OnHandler1);
            wd.Event1 += handler;
        }

        public void OnHandler1(object sender, MyEvent e)
        {
            Console.WriteLine(e.message);
        }
    The OnHandler1 method is called any time the event handler “hears” an event matching the specified signature – you could have it log to a file, write to a database, etc. Run the app in debug mode and you should see output like this: You can distinctly see which lines were written by the console application itself (Program.cs) and which were written by the worker class (WorkerDLL.cs). Technorati Tags: Csharp

    Read the article

  • How to upgrade a remote server from 8.10 to a newer version?

    - by DisgruntledGoat
    I have a remote server still running Ubuntu 8.10 (now 9.04 - see the update below) that I can only access via SSH. If I run apt-get update I get a bunch of 404 errors on the packages. I've asked a few questions on Server Fault but got nowhere. Here's what I've done:
    1. Run apt-get update, which returns errors like:
       Err http://gb.archive.ubuntu.com intrepid/main Packages 404 Not Found [and the same for many other packages]
    2. Run do-release-upgrade, which returns:
       Checking for a new ubuntu release
       Failed Upgrade tool signature
       Failed Upgrade tool
       Done downloading
       extracting 'jaunty.tar.gz'
       Failed to extract
       Extracting the upgrade failed. There may be a problem with the network or with the server.
    3. Edited /etc/update-manager/release-upgrades and changed from Prompt=normal to Prompt=lts (as suggested here). Running do-release-upgrade after this returns:
       Checking for a new ubuntu release
       current dist not found in meta-release file
       No new release found
    4. I have followed the advice in this question and changed /etc/apt/sources.list to refer to jaunty instead of intrepid. However, that distro is not online anymore either. A comment there says I have to upgrade in chronological order...
    So basically, it seems like I cannot upgrade because my current distro is out of date and not supported. Is there a way to upgrade directly to 10.x or 11.x? Note: as this is a server, I only have command-line access.
    UPDATE 24/11: I have managed to upgrade from 8.10 to 9.04. Ubuntu's EOL Upgrades page provides some alternate URLs for apt sources. I also needed to update /var/lib/update-manager/meta-release to point to the old-releases server too. However, now I cannot upgrade from 9.04 to 9.10. Running do-release-upgrade produces the same error as #2 above, except it "Failed to fetch" (the URLs in meta-release are valid). The Ubuntu Jaunty upgrade page says it's necessary to upgrade using a CD image. I followed the instructions here, but it didn't work:
       A fatal error occurred
       Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade is now aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade.
       Traceback (most recent call last):
         File "/tmp/tmp.JLhTwVUugb/karmic", line 7, in sys.exit(main())
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeMain.py", line 132, in main if app.run():
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1590, in run return self.fullUpgrade()
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1506, in fullUpgrade if not self.doPostInitialUpdate():
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 762, in doPostInitialUpdate self.quirks.run("PostInitialUpdate")
         File "/tmp/tmp.JLhTwVUugb/DistUpgradeQuirks.py", line 83, in run for plugin in self.plugin_manager.get_plugins(condition):
         File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 167, in get_plugins filenames = self.get_plugin_files()
         File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 120, in get_plugin_files basenames = [x for x in os.listdir(dirname)
       OSError: [Errno 2] No such file or directory: './plugins'
    It does say to report the bug, but since this is an old unsupported release I don't know if it's worth doing. However, is there a way round this, to upgrade from 9.04 to 9.10 (and then finally to 10.04 LTS)?

    Read the article

  • Installing Windows 7 from a USB Hard Drive.

    - by Mark Tomlin
    I have a Western Digital Passport external hard drive (320GB) that I want to partition so I can keep the data on it but use some of the free space to install Windows 7 onto my desktop computer. Microsoft has given me the Windows 7 Enterprise Edition ISO to download. I would like to take the external HD and partition it so I can fit the ISO image onto it. How would I go about doing this? Trying to use GParted to partition the external hard drive has caused a chicken-or-egg problem: GParted can't see the drive unless it's mounted, but when it is mounted it will not allow me to do anything to the partition. When it's not mounted, GParted can't see the drive at all, and as such can't do anything to it. Once the drive is correctly partitioned, how do I go about moving the ISO image Microsoft gave me to my USB external hard drive? Are there any special steps that I need to take? I am using Ubuntu 11.04 and GParted 0.7.0 on my Chromebook to do this. Any support would be appreciated.

    Read the article

  • SSD as primary or secondary drive on a small Linux server?

    - by Alex Martelli
    I'm pensioning off my 10-year-old home server and replacing it with an Ubuntu 10.04 box. The two storage devices are a Western Digital Caviar Green 2.0TB HD and an Intel X25-M 34nm Gen 2 80GB SATA II 2.5-inch SSD (the box has 8GB RAM and an i5 750, if it matters). I don't care much about boot times (since I don't plan to reboot all that often;-); the most frequent, performance-demanding task will be (re)building large open source C or C++ software packages from source (as an open source contributor, I do that often). So, I thought I'd keep the SSD as the secondary drive and the HD as the primary one, using the SSD mostly for the files that can otherwise demand a lot of seeking (esp. in a parallel make). However, the friendly vendor (perhaps more experienced with Windows systems than with Linux ones) thinks the "normal" way to configure the machine would be with the SSD as the primary drive. I'm pretty rusty on configuring and tuning systems, so I thought I'd better double-check on Super User... thanks in advance for advice about this choice!

    Read the article

  • Dependency Injection Introduction

    - by MarkPearl
    I was recently going over a great book called “Dependency Injection in .NET” by Mark Seemann. So far I have really enjoyed the book and would recommend that anyone looking to get into DI give it a read. Today I thought I would blog about the first example Mark gives in his book to illustrate some of the benefits that DI provides. The ones he lists are:
    • Late binding
    • Extensibility
    • Parallel development
    • Maintainability
    • Testability
    To illustrate some of these benefits he gives a HelloWorld example using DI that illustrates some of the basic principles. It goes something like this…
        class Program
        {
            static void Main(string[] args)
            {
                var writer = new ConsoleMessageWriter();
                var salutation = new Salutation(writer);
                salutation.Exclaim();
                Console.ReadLine();
            }
        }

        public interface IMessageWriter
        {
            void Write(string message);
        }

        public class ConsoleMessageWriter : IMessageWriter
        {
            public void Write(string message)
            {
                Console.WriteLine(message);
            }
        }

        public class Salutation
        {
            private readonly IMessageWriter _writer;

            public Salutation(IMessageWriter writer)
            {
                _writer = writer;
            }

            public void Exclaim()
            {
                _writer.Write("Hello World");
            }
        }
    If you had asked me a few years ago whether I thought this was a good approach to solving the HelloWorld problem, I would have responded with a resounding “No”. How could the above be better than the following…
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Hello World");
                Console.ReadLine();
            }
        }
    Today, my mindset has changed because of the pain of past programs. So often we look at a small snippet of code and make judgements, when we need to keep in mind that we will most probably be implementing these patterns in projects with hundreds of thousands of lines of code, and in projects that have tests we don't want to break - and that's where the first solution outshines the second. Let's see if the first example achieves some of the outcomes that were listed as benefits of DI. Could I test the first solution easily? Yes… We could write something like the following using NUnit and RhinoMocks…
        [TestFixture]
        public class SalutationTests
        {
            [Test]
            public void ExclaimWillWriteCorrectMessageToMessageWriter()
            {
                var writerMock = MockRepository.GenerateMock<IMessageWriter>();
                var sut = new Salutation(writerMock);
                sut.Exclaim();
                writerMock.AssertWasCalled(x => x.Write("Hello World"));
            }
        }
    This would test the existing code fine. Let's say we then wanted to extend the original solution so that we had a secure message writer. We could write a class like the following…
        public class SecureMessageWriter : IMessageWriter
        {
            private readonly IMessageWriter _writer;
            private readonly string _secretPassword;

            public SecureMessageWriter(IMessageWriter writer, string secretPassword)
            {
                _writer = writer;
                _secretPassword = secretPassword;
            }

            public void Write(string message)
            {
                if (_secretPassword == "Mark")
                {
                    _writer.Write(message);
                }
                else
                {
                    _writer.Write("Unauthenticated");
                }
            }
        }
    And then extend our implementation of the program as follows…
        class Program
        {
            static void Main(string[] args)
            {
                var writer = new SecureMessageWriter(new ConsoleMessageWriter(), "Mark");
                var salutation = new Salutation(writer);
                salutation.Exclaim();
                Console.ReadLine();
            }
        }
    Our application has now been successfully extended, and yet we made very few code changes. In addition, our existing tests did not break and we would just need to add tests for the extended functionality. Would this approach allow parallel development?
    Well, I am in two camps on parallel development, but with some planning ahead of time it would allow for it, as you would simply need to decide on the interface signature and could then have teams develop different sections programming to that interface. So, this was really just a quick intro to some of the basic concepts of DI that Mark introduces very successfully in his book. I am hoping to blog about this further as I continue through the book and list some of the more complex implementations of containers.

    Read the article

< Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >