Search Results

Search found 4054 results on 163 pages for 'surround sound'.


  • Stop the constant random reboots of my GIGABYTE GA-B75M-D3V

    - by Frederic
    I've got some issues with a new system: it reboots constantly. The system consists of:
    Brand new: Gigabyte GA-B75M-D3V with F9 BIOS (the latest), Intel Core i5-3470 (Ivy Bridge), 2x 8GB G.SKILL Ripjaws 1600 MHz memory (tested with Memtest86).
    Coming from a stable system: Creative X-Fi Titanium sound card, Asus Radeon HD4850, OCZ Vertex 3 120GB SSD (SATA 3), 1TB hard disk (SATA 2), ASUS Blu-ray drive, 400W PSU.
    Connected peripherals: Toshiba TV (DisplayPort/DVI on the motherboard or the HD4850), wired Logitech mouse, wireless Logitech keyboard, Azio Bluetooth USB key.
    The main problem: it's not possible to read the error codes from the motherboard - there's nothing in the manual, nor on the internet. At the beginning I received a board with graphics problems on top of the rebooting, so I RMA'd it. The new one doesn't have any graphics problems, but it still reboots constantly. I removed everything except the hard disk, the sound card, the Blu-ray drive and the wireless keyboard, and it still reboots unexpectedly. I'm now running a test with just the motherboard and the hard disk, and I will update this text after the test. My questions: does anybody have an idea for a test? Could the PSU cause this problem? I used it for years in the stable system.
    Update 1: BTW, if anyone has the same problem, the manual won't say it, but you'll need to reset the BIOS between two tests (a screwdriver across the two clear-CMOS pins) if you suspect a compatibility problem.

    Read the article

  • convert decrypted .vobs to .avi with ffmpeg on ubuntu

    - by Arcath
    I have a .vob file that has been ripped from a DVD. When I watch the .vob it's very good quality video with 5.1 English audio, but when I run it through ffmpeg I get rubbish video and mono French audio. That was using this command:
    ffmpeg -i /samba/ripping/vobs/12161840#2.vob -f avi /samba/ripping/avis/test.avi
    I've tried a few different variations on that, but it never comes back with anything good, just bigger files with bad video and the wrong sound. I know the video is good and the correct audio streams exist, so how do I select a 5.1 track and get good video? ffmpeg gives the .vob details as:
    Input #0, mpeg, from '/samba/ripping/vobs/12161840#2.vob':
      Duration: 00:42:05.56, start: 0.287267, bitrate: 5738 kb/s
        Stream #0.0[0x1e0]: Video: mpeg2video, yuv420p, 720x576 [PAR 64:45 DAR 16:9], 8436 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
        Stream #0.1[0x80]: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
        Stream #0.2[0x81]: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
        Stream #0.3[0x82]: Audio: ac3, 48000 Hz, mono, s16, 192 kb/s
    Output #0, avi, to '/samba/ripping/avis/test.avi':
      Metadata:
        ISFT : Lavf52.64.2
        Stream #0.0: Video: mpeg4, yuv420p, 720x576 [PAR 64:45 DAR 16:9], q=2-31, 200 kb/s, 25 tbn, 25 tbc
        Stream #0.1: Audio: mp2, 48000 Hz, mono, s16, 64 kb/s
    Stream mapping:
      Stream #0.0 -> #0.0
      Stream #0.3 -> #0.1
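
    One direction worth trying (a sketch, not a verified recipe): ffmpeg is defaulting to the mono track and a very low video bitrate, so explicitly map the video stream and one of the 5.1 AC-3 streams from the listing above and raise (or copy) the bitrates. The stream numbers come from the output shown; the exact option names differ between old and new ffmpeg builds (older ones use -map 0.0, -vcodec, -acodec).

        # keep the video and the first 5.1 track (streams 0:0 and 0:1 above),
        # copy the AC-3 audio untouched and give the video a sane bitrate
        ffmpeg -i '/samba/ripping/vobs/12161840#2.vob' \
               -map 0:0 -map 0:1 \
               -c:v mpeg4 -b:v 2500k \
               -c:a copy \
               /samba/ripping/avis/test.avi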

    Read the article

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    OK, this might sound stupid, but is there any downside in practice to just enabling jumbo frames? From what I understand: any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the maximum segment size is negotiated during connection setup. UDP is a theoretical problem, as a server may just send a LARGE UDP packet that gets dropped on the way. Practically, though, as UDP is packet based, I don't really think any software WOULD send a UDP packet larger than 1500 bytes net without application-level configuration changes - at least this is how I do my own programming, as it is quite hard to pick a decent MTU size without testing it yourself, so you fall back to packets of at most 1500 bytes. The network in question is a standard small-business network - we have just upgraded from an unmanaged 24-port switch to a 52-port switch with four 10G ports (Netgear - quite cheap) and will move a file server to 10G, also for serving iSCSI. All my equipment at the Ethernet level can handle at least 9000 bytes, and because of local firewalls I really want to use larger packets (less firewall processing), but the network is also NAT'ed to the internet. On top of that, different machines move around (download) large files (multi-gigabyte range) quite often for processing. The question is: can I expect problems if I just enable jumbo frames? Again, this is not total ignorance - I just don't see programs sending UDP packets of more than 1500 bytes (if that is a practical problem, please tell me), and for TCP the MSS is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but that has its own share of problems, as basically most workstations would then have to be on both VLANs.
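
    One cheap sanity check before (and after) flipping the setting is to confirm that full-size jumbo frames actually make it end to end with the don't-fragment bit set: 8972 bytes of payload plus 28 bytes of IP/ICMP headers gives a 9000-byte IP packet. A sketch, assuming a Windows sender and a test host at 192.168.1.10 (both are placeholders):

        rem Windows: -f sets don't-fragment, -l is the payload size in bytes
        ping -f -l 8972 192.168.1.10

        rem the Linux equivalent, for comparison
        rem ping -M do -s 8972 192.168.1.10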

    Read the article

  • Simulating audio playback on headless linux server

    - by afro
    Hi people, we have a headless Linux server (Debian 5) that we use for running integration tests of our web-page code. Among these tests are ones implemented with Selenium, which essentially simulates a user browsing our pages and clicking on things. One of these tests is failing now because it involves starting a Flash-based audio player and checking whether the progress bar is displayed properly. The reason this test fails is that there is no way to play the audio, and no sound card in the machine, which has plain web-server hardware. So my question is: is there a simple way of giving a program the impression that its audio output is being processed and playback is taking place? I don't have to record the playback, or redirect it, or anything like that - I just need a dummy sound card, like the dummy X server we are using, which does not actually need to display anything. I have tried JACK, but it's too complicated, and the documentation does not even answer this very simple question. I also installed ALSA on the server; it 'pretends' to run, but when a program tries to play audio it just spews error and debug information about the non-existent sound card. It would be really awesome if one of you has a simple answer to this question. Cheers, Ulas
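
    ALSA ships a virtual driver that behaves like exactly this kind of dummy sound card: snd-dummy accepts playback and discards it, so applications see a normal audio device. A minimal sketch (the Debian module file path is the usual one, but treat it as an assumption for your setup):

        # load the dummy ALSA driver
        modprobe snd-dummy
        # check that a card now shows up
        aplay -l
        # make it survive reboots
        echo snd-dummy >> /etc/modules

    ALSA's "null" PCM plugin in an .asoundrc is another route, but snd-dummy is usually enough for a player that just wants to open a device.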

    Read the article

  • HP Proliant DL380 G4 - Can this server still perform in 2011?

    - by BSchriver
    Can the HP ProLiant DL380 G4 series server still perform at a decent level in the 2011 IT world? This may sound like a weird question, but we are a very small company whose primary business is NOT IT related, so my IT dollars have to stretch a long way. I am in need of a good web and database server. The load and demand will be fairly low for a while, so I am not looking for (nor do I have the money for) a brand new HP DL380 G7 box at $6K. While searching around today I found a company in Atlanta that buys servers off business leases and then strips them down to parts. They clean, check and test each part and then custom "rebuild" the server to whatever specs you request. The interesting thing is that they also provide a 3-year warranty on all the servers they sell. I am contemplating buying two of the following:
    HP ProLiant DL380 G4
    Dual (2) Intel Xeon 3.6 GHz processors, 800 MHz FSB, 1MB cache
    8GB PC3200R ECC memory
    6 x 73GB U320 15K rpm SCSI drives
    Smart Array 6i card
    Dual power supplies
    Plus the usual CD-ROM, dual NIC, etc.
    All this for $750 each, or $1500 for two pretty nicely equipped servers. The price then jumps for the next model up, the G5 series: it goes from $750 to around $2000 for a comparable server, and I just do not have $4000 to buy two servers right now. So back to my original question: if I load Windows Server 2008 R2 and IIS 7 on one of the machines, and Windows Server 2008 R2 and MS SQL Server 2008 R2 on the other, what kind of performance might I expect to see? The fact is that this series is now three generations behind the G7s, and it was built when Windows 2000 Server was the dominant OS and Windows Server 2003 was just coming out. If you are running Windows Server 2008 R2 on a G4 with similar or lesser specs, I would love to hear what your performance is like.

    Read the article

  • Migrate users from one Active Directory domain to another?

    - by Matt
    I work for a company that hosts desktops for a number of different companies. At the moment, all the clients access a single domain controller called HOSTING. Under that are groups for each company. All of the hosting servers exist on the same network and are therefore potentially browseable from the other terminal servers. This has raised some security issues, and I've found it a little tricky to manage the security. As well, it's possible to see who the other hosted companies are, even though other users cannot see their data. What I'd like to do is isolate each client's terminal server(s) into their own VLAN. In addition, I'm thinking that each terminal server would have its own DC, which could just run on the terminal server for that company - the overhead for a DC is fairly minimal. This would completely isolate users on that terminal server from seeing the other companies. Firstly, does this sound like a sensible plan? Secondly, if it is sensible, how would I go about pulling the accounts from the HOSTING domain across to a new domain - ideally without the need for users to change their passwords?
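
    For the account move itself, Microsoft's ADMT (with its Password Export Server component to carry passwords across) is the usual tool for migrations like this; a rougher sketch of moving just the account objects can be done with ldifde. The OU paths and attribute list below are illustrative only, and this route does not bring passwords with it:

        rem export one client's user objects from the HOSTING domain
        ldifde -f clientA-users.ldf -d "OU=ClientA,DC=hosting,DC=local" -r "(objectClass=user)" -l "cn,givenName,sn,sAMAccountName,mail"

        rem after editing the DNs in the .ldf to match the new domain, import them there
        ldifde -i -f clientA-users.ldf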

    Read the article

  • How to go about scaling a web application?

    - by phoenix24
    I've been primarily a web-application developer and don't know much about scaling/scalability techniques. I'll start by stating that my application is written in Python using Django - a fairly standard setup. I currently use Apache 2.2 as my web server and MySQL as my database server, both running on the same VPS. Up until now it was basically a prototype with merely 15-30 concurrent users at any given time, so I had no issues, but now that we'll be adding more users we'll have severe performance issues. So my question is: how do I go about scaling my web application? My plan is as follows. Right now I have just one VPS running Apache + MySQL. Next, I plan to add another VPS to run only MySQL, so I'll have one web server and one DB server. Next, I'll add Memcached on the web server for caching data and taking some load off MySQL. Next, another web server for serving all the static content. Next, a VPS for load balancing (nginx/varnish), behind which would sit my two web servers and then the DB server. Does that sound like a workable strategy? Please guide me here.
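
    The first two steps (split the database out, add Memcached) are mostly configuration. A rough sketch of the commands involved, with placeholder addresses (10.0.0.1 for the new DB box, 10.0.0.2 for the web server), a placeholder schema name and a placeholder password:

        # on the new database VPS: let the web server connect to the app's schema
        mysql -u root -p -e "GRANT ALL ON appdb.* TO 'django'@'10.0.0.2' IDENTIFIED BY 'change-me';"
        # (also set bind-address in my.cnf so MySQL listens on 10.0.0.1 rather than localhost)

        # on the web VPS: run a Memcached instance for Django's cache backend to point at
        memcached -d -m 256 -p 11211 -l 127.0.0.1 -u nobody

    Django's database HOST setting then points at 10.0.0.1 and its cache backend at 127.0.0.1:11211.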

    Read the article

  • Best way to integrate applications into a Windows 7 install.wim image

    - by cyph3r
    I currently have unmodified .iso images of the Windows 7 32-bit and 64-bit installation disks, and I need to integrate some applications (Office, Adobe Reader, etc.) and Windows updates into them, so that when Windows is installed the above applications/updates are already installed and working. Requirements:
    My output has to be an install.wim image containing the new/improved Windows installation files, because deployment is done via a PXE server and a custom Windows PE environment.
    The procedure for creating the install.wim has to be as automatic as possible; I can't create it manually every time I want to incorporate a new Windows or application update into the image.
    The image will be installed on 100+ computers, so it needs to be 'generic'.
    I've never done anything like this before, but from what I've researched a possible solution would be: create a reference installation (preferably in a VM so I can take snapshots) complete with its applications/updates/settings; after the complete setup, take a snapshot of the installation; run C:\Windows\System32\sysprep\sysprep.exe /oobe /generalize /shutdown to sysprep the machine; boot into a Windows PE environment and capture the .wim image using GImageX; deploy the .wim and enjoy the rapid installation times. :D Does that sound OK? Would you recommend anything else? Right now the applications are installed after the Windows installation is complete, so the total installation time is quite long. That's why I need a different approach.
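
    The sysprep-and-capture route outlined above maps to a handful of commands. A sketch, assuming an ImageX/GImageX-style capture from Windows PE and a second partition or mapped share as D: for the output (drive letters and image names are placeholders):

        rem inside the reference install, just before shutting it down
        C:\Windows\System32\sysprep\sysprep.exe /oobe /generalize /shutdown

        rem from Windows PE, capture the generalized system into a WIM
        imagex /capture C: D:\install.wim "Windows 7 custom" /compress maximum

        rem later servicing without rebuilding: mount the WIM and add updates offline
        dism /Mount-Wim /WimFile:D:\install.wim /Index:1 /MountDir:D:\mount
        dism /Image:D:\mount /Add-Package /PackagePath:D:\updates\update.cab
        dism /Unmount-Wim /MountDir:D:\mount /Commit

    Application installs still have to happen in the reference VM before sysprep; DISM's offline servicing only covers updates, drivers and packages.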

    Read the article

  • PS3 controller -> PC -> emulators -> TV

    - by abrereton
    I'm researching a media PC for the living room. Playing videos and audio and streaming from the Internet is straightforward enough; I would also like it to double as a gaming console, and I was wondering if anyone has any thoughts on this. So far I've discovered that a PS3 controller (thankfully it uses USB and Bluetooth) can be connected to a PC. I've also found that MAME, MESS and PCSX2 are all the emulators I need (I can even emulate a TI-83 calculator with MESS). These emulators can re-map keys, so for example I can map the Nintendo A button to the PS3 X button, or the SNES d-pad to the PS3 d-pad or analog stick. There are also front-ends to these emulators that can do fancy things like image scaling, anti-aliasing and double-buffering to improve the image quality of 8-bit Mario on a 50-inch plasma. My setup would be this: PS3 controller connecting over Bluetooth to the PC; PC running Windows with the PS3 controller drivers and all my emulators; a network drive with all my ROMs; the PC connected to the TV via HDMI; the TV playing Super Mario Kart. Does this sound feasible? Does anyone have experience doing anything like this? Is this a good idea, or should I grow up and stop living in the past?

    Read the article

  • How do I collect SNMP readings from intermittently-connected sites?

    - by Luke404
    I am collecting SNMP data on-site for a number of systems, currently using Cacti. These systems are spread across a number of sites that aren't always connected to the internet, but I also need to centralize the data on a single system (a datacenter-housed server) and get graphs out of it. If I polled the remote systems directly with a centralized Cacti, I'd lose data whenever a site is not connected to the internet. So I should record data on-site (I have a server at each site and can run whatever I want on it) and then 'sync' everything to the central system. One hack could be running Cacti (or rrdtool directly) on-site and then periodically rsyncing the RRD data to the central Cacti system, but that doesn't sound like a 'clean' solution: every RRD would have to be defined in both places, and rsync scripts set up with the specific file names. Can you suggest a better solution? Cacti is not a requirement, but I'd like to use something like it on the central system. The on-site systems only need to collect data; I don't need to graph it there, or manage user rights to view the data and things like that - users will only access the centralized system.
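
    For completeness, the rsync hack described above is at least tolerant of flaky links if it runs from cron and only moves changed files; the paths and host name here are placeholders, and it still leaves the duplicate-definition problem unsolved:

        # push this site's RRDs to a per-site directory on the central box;
        # --partial lets an interrupted transfer resume on the next run
        rsync -az --partial /var/lib/cacti/rra/ central.example.com:/var/lib/cacti/rra/site1/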

    Read the article

  • How does a CPU communicate with hardware?

    - by b-gen-jack-o-neill
    Good day. I am new here, but I could not find an answer to my question using Google, so I hope I am not violating any rules. Basically, all I want to ask is: how does the CPU communicate with other hardware, such as printers, the graphics card, the sound card, the LAN card, etc.? I know that for basic system I/O you can use BIOS interrupts - INT 10h, I believe, is for display output. But what I would like to know is what actually happens when you execute the instruction int 10h. From the description of the int instruction, it should jump to a routine whose address is stored in the interrupt table. But how does this routine get into RAM? Does the BIOS save those routines to RAM? And what does that routine actually do? I mean, the CPU can only access RAM, right? So how can it access other hardware? Is there some special instruction for it? Or is the CPU somehow connected to the BIOS, so that the BIOS actually does the work? And the last thing: do OSes like Windows or GNU/Linux use BIOS interrupts, or can the OS access hardware directly? Thanks.

    Read the article

  • Outlook 2007 "Mark as Not Junk" Dialog Confusion

    - by David
    Outlook 2007's "Not Junk" button opens the "Mark as Not Junk" dialog. The dialog works correctly if I keep the "Always trust e-mail from <email address>" option checked; that is, the message is removed from the Junk folder and returned to the Inbox. However, if I uncheck the "Always trust" box, pressing OK dismisses the dialog, but nothing else happens. Why not? According to Outlook help, "When you mark a message as not junk, you are given the option of adding the sender or the mailing list name to your Safe Senders List or Safe Recipients List." That sure makes it sound like this is just an option, and not necessary for the core functionality of the action. I really don't want to trust a (possibly forged) From: address, but I do want my mail back in the Inbox. I could manually drag it, but I'm assuming that marking a message as not junk also trains some kind of Bayesian filter. Am I mistaken? Thanks.

    Read the article

  • SharePoint extranet security concerns, am I right to be worried?

    - by LukeR
    We are currently running MOSS 2007 internally, and have been doing so for about 12 months with no major issues. There has now been a request from management to provide access from the internet for small groups (initially), comprised of members from other community organisations like ours - committees and the like. My first reaction was not joy when presented with this request, but I'd like to make sure the apprehension is warranted. I have read a few docs on TechNet about security hardening with regard to SharePoint, but I'm interested to know what others have done. I've spoken with another organisation that has already implemented something similar, and they have essentially port-forwarded from the internet to their internal production MOSS server. I don't really like the sound of this. Is it advisable/necessary to run a DMZ-type configuration, with a separate web front end on a contained network segment? Does that even offer me any greater security than their setup? Some of the configurations from the TechNet docs aren't really feasible given our current network budget. I've already made my concerns known to management, but it appears this will go ahead in some form or another. I'm tempted to run a completely isolated, separate install just for these types of users. Should I even be concerned about it? Any thoughts or comments would be most welcome at this point.

    Read the article

  • Authenticate domain-user credentials on unjoined virtual machine?

    - by bwerks
    Hi all, this question may sound silly, and perhaps a bit insane, but is there any way to run a process on a machine not joined to a domain using the credentials of a user in that domain? In my case, I'm running virtual machines installed with release binaries from our build process, as well as Visual Studio. Visual Studio is there to debug our release binaries, but it's being executed with VM-local user credentials. This means it can't authenticate to our TFS deployment when executing "tf.exe view" to use our Source Server for debugging. Team Explorer manages to authenticate to TFS using a UI prompt, but I suspect that's because we supply it with the TFS deployment's URI and it's designed to display a prompt to facilitate workgroup scenarios; i.e. it's not like we're getting it for free. My instincts tell me the only way to authenticate on this VM is to join it to the domain, or somehow form a one-way trust or something, but is there an easier way? We're going to want to script this eventually for automation, but first I'm surveying the feasibility of the thing.
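
    One thing worth a look before setting up a trust is runas with the /netonly switch: the process runs locally under the VM account but presents the supplied domain credentials for any network authentication, which is what tf.exe needs against TFS. A sketch with a placeholder domain and user:

        rem start a shell that uses CORP\builduser for remote authentication only,
        rem then launch devenv.exe / tf.exe from that shell so they inherit the credentials
        runas /netonly /user:CORP\builduser cmd.exe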

    Read the article

  • This .mpg video clip doesn't play well

    - by Roey
    I've installed the K-Lite Mega Codec Pack v6.9.0 with playback essentials, without a player. My default and only media player is Windows Media Player. Here is the clip's media info:
    General
      Complete name : D:\Users\Roey\Downloads\B384MV.mpg
      Format : MPEG-PS
      File size : 273 MiB
      Duration : 4mn 59s
      Overall bit rate : 7 643 Kbps
    Video
      ID : 224 (0xE0)
      Format : MPEG Video
      Format version : Version 2
      Format profile : Main@High
      Format settings, BVOP : No
      Format settings, Matrix : Default
      Format settings, GOP : M=1, N=15
      Duration : 4mn 57s
      Bit rate mode : Variable
      Bit rate : 7 363 Kbps
      Nominal bit rate : 9 000 Kbps
      Width : 1 920 pixels
      Height : 1 080 pixels
      Display aspect ratio : 16:9
      Frame rate : 25.000 fps
      Color space : YUV
      Chroma subsampling : 4:2:0
      Bit depth : 8 bits
      Scan type : Progressive
      Compression mode : Lossy
      Bits/(Pixel*Frame) : 0.142
      Stream size : 261 MiB (96%)
    Audio
      ID : 192 (0xC0)
      Format : MPEG Audio
      Format version : Version 1
      Format profile : Layer 3
      Mode : Joint stereo
      Duration : 4mn 59s
      Bit rate mode : Constant
      Bit rate : 128 Kbps
      Channel(s) : 2 channels
      Sampling rate : 44.1 KHz
      Compression mode : Lossy
      Stream size : 4.56 MiB (2%)
    Menu
    When I play it there is no sound (just a little "kahhhh" noise every 10-20 seconds) and the frames move very slowly - it "jumps" frames. A blue tray icon [FFa] "ffdshow audio decoder" pops up with the following details:
    Input: MP3, stereo, 44100 Hz (libavcodec)
    Output: PCM, stereo, 44100 Hz, 16-bit integer
    Any help will be much appreciated. Thanks

    Read the article

  • XenServer 5.5 running WHS.. trying to add local or network printer/scanner/copier

    - by ProstheticHead
    Hi guys. Just wondering if anyone has prior experience sharing a multifunction printer across a (mostly Windows) network? My situation at the moment is complicated. I DID have the printer attached directly to a Windows Home Server box, and was able to scan to a share and print across the network from my other computers. Compatibility problems have forced me to virtualize WHS on XenServer 5.5. This is actually quite useful, because I can now run other things on the same box, but the problem is that WHS no longer gets direct hardware access, so it doesn't see the USB-attached PSC... Grrrr! So now I have a choice to make. I've read somewhere that I can buy an add-in PCI USB controller and somehow set up passthrough to one VM at a time. To me this sounds complicated, but if it's likely to work reliably I'd prefer this method. I've read about another approach which I'm not sure about either, but which I guess sounds plausible: a network USB server (NOT a print server) that can somehow make a USB port accessible across a network. My worry here is that it likely needs some kind of third-party software to work, so it's not ideal. If there are any other methods you can suggest I'd be happy to hear them - I need your help, guys. I'm also in the market for a PCI Express SATA controller: nothing flashy, I just need up to 8 ports, JBOD and 100% compatibility. Any suggestions? Regards, Kevin

    Read the article

  • Setting up MySQL Linux slave with a Windows master

    - by philwilks
    I'm running a MySQL 5.0 database server on Windows Server 2008. The total size of the database is about 1GB. I make daily backups, but I'd like to step up to having a slave server for extra protection. My thinking was that I wouldn't need the expense of a Windows machine to do this, and that a Linux "cloud server" from Rackspace would do the job well for quite a low cost. However, I have little experience with Linux, so I have a few questions:
    Does this sound like a good idea?
    Is there anything wrong with linking Windows and Linux MySQL servers?
    Does Linux have the equivalent of Remote Desktop Connection? If so, can I use it from a Windows machine?
    Would a particular Linux distro be well suited to this task? Rackspace offers Arch Linux, CentOS, Debian, Fedora and Ubuntu; my immediate thinking is to go with Ubuntu, as I've heard it's friendlier for people coming from a Windows background.
    Any comments you have would be very much appreciated! Phil
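
    Mixing a Windows master with a Linux slave is supported in MySQL 5.0; the setup is the same as for any master/slave pair. A compressed sketch - the account, password and master hostname are placeholders, the binlog file/position come from SHOW MASTER STATUS, and the my.ini/my.cnf edits are shown only as comments:

        # on the Windows master (my.ini): server-id=1, log-bin=mysql-bin, then restart MySQL;
        # create a replication account and note the current binlog coordinates
        mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'change-me'; SHOW MASTER STATUS;"

        # load an initial dump of the ~1GB database onto the slave first, then
        # on the Linux slave (my.cnf): server-id=2, restart, and point it at the master
        mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl', MASTER_PASSWORD='change-me', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98; START SLAVE;"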

    Read the article

  • PC power supply & normal range for voltages reported in BIOS hardware monitor?

    - by Chris W. Rea
    I'm trying to diagnose whether my computer has an adequate power supply. Sometimes when I play a video-intensive game, both monitors lose the video signal, even though the computer stays on and the sound keeps playing. One theory I have is that the video card isn't getting sufficient power. I can't imagine it's overheating, because the machine is well ventilated and the video card isn't hot to the touch when this happens. Anyway, in my PC's BIOS there's a Hardware Monitor page, and among the other voltages reported (CPU, DRAM, South Bridge, etc.) I can see the following values:
    3.3V rail: 3.152V
    5V rail: 4.944V
    12V rail: 11.872V
    Are those the voltages used by peripherals? Which voltage should I be looking at if I want to know what my video card (PCI Express) is drawing? What is the normal range for these values? My readings above appear to be low by approximately 4.5%, 1.1%, and 1.1% respectively - is that cause for concern? How else should I determine whether my power supply is "right-sized" for my PC and video card, or am I perhaps barking up the wrong tree?

    Read the article

  • Is this hard disk dead?

    - by Korjavin Ivan
    Not sure if this is the right site for this question, but let me try. I've recently been having a problem with a hard disk: sometimes it makes strange sounds, and I see this in the logs:
    $ dmesg | grep ata4
    [29409.945516] ata4.00: exception Emask 0x10 SAct 0xf SErr 0x90202 action 0xe frozen
    [29409.945529] ata4.00: irq_stat 0x00400000, PHY RDY changed
    [29409.945538] ata4: SError: { RecovComm Persist PHYRdyChg 10B8B }
    [29409.945546] ata4.00: failed command: READ FPDMA QUEUED
    [29409.945562] ata4.00: cmd 60/30:00:56:22:5f/00:00:00:00:00/40 tag 0 ncq 24576 in
    [29409.945573] ata4.00: status: { DRDY }
    [29409.945580] ata4.00: failed command: READ FPDMA QUEUED
    [29409.945594] ata4.00: cmd 60/18:08:8e:22:5f/00:00:00:00:00/40 tag 1 ncq 12288 in
    [29409.945605] ata4.00: status: { DRDY }
    [29409.945611] ata4.00: failed command: READ FPDMA QUEUED
    [29409.945625] ata4.00: cmd 60/08:10:46:02:66/00:00:00:00:00/40 tag 2 ncq 4096 in
    [29409.945635] ata4.00: status: { DRDY }
    [29409.945641] ata4.00: failed command: READ FPDMA QUEUED
    [29409.945656] ata4.00: cmd 60/80:18:ee:04:66/00:00:00:00:00/40 tag 3 ncq 65536 in
    [29409.945666] ata4.00: status: { DRDY }
    [29409.945679] ata4: hard resetting link
    [29413.976083] ata4: softreset failed (device not ready)
    [29413.976097] ata4: applying SB600 PMP SRST workaround and retrying
    [29414.148070] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [29414.184986] ata4.00: SB600 AHCI: limiting to 255 sectors per cmd
    [29414.243280] ata4.00: SB600 AHCI: limiting to 255 sectors per cmd
    [29414.243292] ata4.00: configured for UDMA/133
    [29414.243324] ata4: EH complete
    [680674.804563] ata4: exception Emask 0x50 SAct 0x0 SErr 0x90a02 action 0xe frozen
    [680674.804575] ata4: irq_stat 0x00400000, PHY RDY changed
    [680674.804584] ata4: SError: { RecovComm Persist HostInt PHYRdyChg 10B8B }
    [680674.804603] ata4: hard resetting link
    [680678.840561] ata4: softreset failed (device not ready)
    Is this ata4 SATA hard drive dead? Must I change it ASAP? Do I need to provide more info?
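
    The messages above are link/PHY resets rather than media errors, so before condemning the drive it is worth reading its own SMART data and checking the cable/port. A quick sketch (/dev/sdX is a placeholder for whichever device node ata4 maps to):

        # overall SMART health plus reallocated/pending sector counters
        smartctl -a /dev/sdX
        # kick off a short self-test and read the result a few minutes later
        smartctl -t short /dev/sdX
        smartctl -l selftest /dev/sdX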

    Read the article

  • Interruptionless Uploads / Rollback in IIS

    - by NickatUship
    I'm not sure if this is the right way to ask this question, but here's basically what I'd like to do:
    1.) Push a changeset to a site in IIS.
    2.) Not interrupt the users.
    3.) Be able to roll back effortlessly.
    There are a few things that I know have to happen, and that are already handled:
    1.) Out-of-proc session state
    2.) Out-of-proc cache
    So the questions that remain:
    1.) How do I avoid interrupting the users? If I just upload the files to bin, the app recycles and takes 10+ seconds to come back online.
    2.) How do I roll back effortlessly?
    I was thinking a possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to the private one and get warmed up; after warm-up, the sites are swapped. A rollback then only entails swapping back without an upload. This seems sound in theory, but I'm not sure of the mechanics. Any ideas?
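
    One way to express the swap idea is to leave the public site alone in IIS and just repoint its root at a different release folder, which appcmd can do in one command; rollback is the same command aimed at the previous folder. A sketch - the site name and folder paths are placeholders, and the application still restarts once when the path changes, so the new folder should be warmed up first through the private site:

        rem point the public site's root at the new, pre-warmed release
        %windir%\system32\inetsrv\appcmd set vdir "Default Web Site/" -physicalPath:"D:\releases\v2"

        rem rollback: point it back at the previous release folder
        %windir%\system32\inetsrv\appcmd set vdir "Default Web Site/" -physicalPath:"D:\releases\v1"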

    Read the article

  • Linux/hostapd: AP can ping clients, clients can access internet, can't access www@wlan1 with more than 5-6 packets at once

    - by mhambra
    Please edit the title, I can't make it sound better. -- OP.
    Hi all, I have a Wi-Fi USB dongle in a PC that serves as an AP for a laptop.
    wlan1: 192.168.2.1, netmask 255.255.255.0, routed with: route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.2.1
    Pinging 192.168.2.2 (the laptop) was fine for lots of packets. Now, when I try to access 192.168.2.1:80/myindex.html (Apache) from the laptop, I can see that 1 KB test page. But trying to access 192.168.2.1:80/my.jpg, I see the following:
    GET /my.jpg HTTP/1.1
    200 OK
    <jpg header, about a kilobyte>
    <TCP packet retransmission>
    <TCP packet retransmission>
    <end of stream>
    It seems to be a hostapd problem (networking worked fine over ad-hoc), but it may also be a forwarding/routing problem. What should I google for? Even more strangely, SSH to that host works fine.
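
    A couple of low-effort checks, sketched with the addresses from the question, that usually narrow this kind of "small replies work, big replies stall" symptom down to either packet loss on the wireless side or an MTU problem:

        # watch what actually leaves the AP while the laptop requests the jpg
        tcpdump -ni wlan1 host 192.168.2.2 and tcp port 80

        # see whether full-size unfragmented packets make it to the client
        ping -M do -s 1472 192.168.2.2

        # crude workaround test: temporarily lower the wlan MTU and retry the download
        ip link set wlan1 mtu 1400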

    Read the article

  • Broad Band LAN connection through existing 6 wire phone line ( no phone connected )

    - by Paul Taylor
    I have an (up to, but never achieved) 10 Mb broadband signal coming into my house along the telephone line. The modem then distributes the broadband via a LAN connection and a Wi-Fi signal to my computers and iPad. My workshop desk, in a separate building, is 54 yards (approx. 50 m) downhill from the modem - too far for a good direct signal. The inside corner of the workshop (by a window) receives a weak signal. I have a disconnected telephone cable consisting of 6 wires going from the house to the workshop. The telephone is no longer used in the workshop - we use our mobile number for business calls - so a broadband connection using that cable would not have to share it with a phone. What would be the most economical/efficient way to get the broadband signal to the workshop? I am writing in the hope that, since the telephone didn't need all six wires, an Ethernet connection might be similar and not need all eight wires for a LAN connection. In theory I could use the telephone wire to pull an Ethernet cable through the underground pipe, but I doubt the old cable would survive the strain. The broadband connection in the workshop does not need to provide network facilities. My skill level is high enough to do house wiring, plumbing and gas fitting, so given sound advice and a wiring diagram or instructions I can probably work out the rest myself.

    Read the article

  • Is a Hyperthreaded CPU more powerful and more efficient than a Dual-core CPU? [closed]

    - by user1811864
    Which computer should I choose (Pentium processor)? Hello, they are getting rid of the old computer equipment in the office and I have to choose a computer to take home - I get first pick. The options are:
    1) 15-inch LCD screen, 4 GB of RAM, Core 2 Duo E8400 (dual core, 3.00 GHz), DVD writer, Windows Vista / Linux
    2) 15-inch CRT monitor, 2 GB of RAM, Pentium 4 2 GHz (single core with HT technology), Windows XP
    Both have 250 GB hard disks. My friend is telling me to choose the second one, the single-core Pentium with HT, because he says it runs faster thanks to HT technology, runs cooler and consumes less electricity, so it won't overheat; and that because it has HT technology it is "high definition" for encoding and watching HD movies with HD sound, and is like a gaming PC for playing internet games. He also said the dual-core E8400 runs at 3 GHz compared to 2 GHz, so it gets very hot because of the two extra cores, uses more electricity (raising the bills), and is not good for gaming, watching HD movies, or internet Flash animations and games because it overheats all the time. He wants to take the E8400 himself because he has air conditioning at home, so it will be safe from heating. So which computer should I take? Is the second one really faster because of the HT technology, and will I be able to play internet Flash card games better, watch good HD movies on YouTube, etc., and play all my music and songs?

    Read the article

  • Mac Finder: WD My Passport won't mount

    - by Matt
    I really need your help. I have a WD My Passport 650GB (with FireWire and USB). I've been using it for almost a year now and it has always worked fine. While out and about I simply plug it in via FireWire; at home I connect it to my AirPort Extreme to have it available as network storage. Today I connected the drive to my MacBook Pro (via FireWire) and NOTHING. The drive is spinning up (it's clearly making a sound and the power indicator is flashing), but it won't appear in Finder. I also tried USB - no sign of it. I ran Disk Utility and tried to repair the disk. On the first try I got a red error line saying that something is wrong with the "headers"; however, the repair completed with a success message saying everything is OK. I also verified the drive, again with a success message. I did this a few more times, unplugging the drive in between, and never got the headers error again - it always completes and says everything is OK. However, I still can't mount the drive. That is what Disk Utility is showing. Any ideas? I really need the files on that drive. Thank you in advance!
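
    One avenue to try from Terminal, since Finder sometimes refuses to mount a volume that Disk Utility claims is fine, is mounting it manually with diskutil; the disk identifier (disk2 here) is a placeholder read off the first command's output:

        # find the drive's identifier (e.g. disk2) and its volumes
        diskutil list
        # try to mount every volume on that disk, then re-check the filesystem
        diskutil mountDisk disk2
        diskutil verifyVolume disk2s1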

    Read the article

  • How to run a Fujitsu P27T-7 LED monitor at a non-native resolution and have perfect font rendering

    - by Ilia Rostovtsev
    My problem is the complete opposite of anything I could find: I need to run my monitor at a NOT-native resolution and still have perfect font rendering. I recently got myself an Ultra HD 2560x1440 27-inch monitor (Fujitsu P27T-7 LED), and I have an issue with it. I would call it a personal problem, but I'm afraid it's not, as a few people have already agreed with me. I do programming, and the text at 2560x1440 is way too small for comfortable use. When I change the resolution to regular Full HD (1920x1080) the size becomes just right, but the text now looks slightly blurry, compared both to the native resolution and to my old 23-inch NEC. I am pretty frustrated and not sure how to make the fonts look as sharp as they should. I can't work at the native resolution (my vision is 100% fine); if you do the maths, the rendered size at Ultra HD (2560x1440) on 27 inches is around 30% smaller than Full HD (1920x1080) on 23 inches. To get the same font size as a Full HD 23-inch screen, a 2560x1440 monitor would have to be around 32 inches. If I set the new monitor to 1920x1080, the font size is just right but the quality isn't, because it's blurry. Could anyone please advise me on how to solve this problem? Spec: nVidia 560 Ti with a DVI-D port, running Fedora 20.
    EDIT 1: Changing fonts doesn't really help, as everything else still doesn't look the way it should.
    EDIT 2: The monitor buzzes badly at 2560x1440 when there are lots of lines on the screen, like a file listing. If I type ls /usr/bin it makes a nasty, irritating sound. At 1920x1080 it's a bit better. Any idea why?
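
    The usual way around this on X is to keep the panel at its native 2560x1440 (so every pixel stays sharp) and scale the text up instead by raising the font DPI for the desktop. A sketch; the output name DVI-0 and the DPI value are assumptions to adjust for the actual setup:

        # stay at the native mode so neither the panel nor the GPU rescales the image
        xrandr --output DVI-0 --mode 2560x1440

        # render fonts larger by telling the toolkits the screen has a higher DPI
        echo "Xft.dpi: 120" >> ~/.Xresources
        xrdb -merge ~/.Xresources

    GNOME and KDE expose a similar text-scaling setting in their display/font options, which covers most applications without touching .Xresources.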

    Read the article
