Search Results

Search found 278 results on 12 pages for 'adrian muraru'.

Page 4/12 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How to reference a Domain Controller out of the Local Network?

    - by Adrian
    We have multiple servers scattered over different hosting providers. For learning, experimenting and, ultimately, production purposes, I set one of them up as a Domain Controller. That went well; most of our services are now authenticating via AD, which helps us a lot. What I want to do now is to simplify authentication on the other servers by making each of them look at the Domain Controller, so our devs can log into them (via Remote Desktop) with the same AD credentials. I know I have to configure each server to look at the Domain Controller. But when I try to join a server to the domain, it cannot find the Domain Controller, although the Domain Controller address is a valid, reachable internet sub-domain (as in "ad.ourcompany.com"). This is the detailed error message:

    Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt. The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain ad.ourcompany.com: The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR) The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.ourcompany.com Common causes of this error include the following: - The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses: 109.188.207.9 109.188.207.10 - One or more of the following zones do not include delegation to its child zone: ad.ourcompany.com ourcompany.com com . (the root zone) For information about correcting this problem, click Help.

    What am I missing? I'm an experienced Dev, but a newbie Sysadmin experimenting with new stuff.

    Disclaimer: All IP addresses and domains/subdomains were changed to preserve security. If by any chance you can still see private information, please let me know so that I can change it.
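    A quick way to narrow this down (a diagnostic sketch using the placeholder domain from the question, and assuming the DC also runs DNS) is to check whether the SRV record from the error resolves at all, and whether it resolves only when the DC's own DNS server is asked:

        nslookup -type=SRV _ldap._tcp.dc._msdcs.ad.ourcompany.com
        nslookup -type=SRV _ldap._tcp.dc._msdcs.ad.ourcompany.com ad.ourcompany.com

    If the second query succeeds and the first one fails, the member servers are simply not using a DNS server that carries the AD zone, which is exactly what the 109.188.x.x resolvers in the error message suggest.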

    Read the article

  • DFS keeps constantly replicating almost all files

    - by Adrian Godong
    We have always had problems with DFS, but recently it has gotten worse (for no apparent reason), to the point where it is becoming harmful. We have one master server and DFS connections to four other servers. The four servers don't modify any files, so all replication always propagates from the master to the four other servers. The replicated directory has about 900,000 files. In recent weeks, every time we check DFS, the DFS backlogs contain hundreds of thousands of files. For instance, right now the master server is replicating about 700,000 files to three of the four servers, while the fourth one is fine. Sometimes only one is off, sometimes two, and this time three. Also, it is never the same set of servers. It is inconceivable that something periodically touches all 900,000 files. The biggest change that happens is a scheduled update of several thousand files every six hours. Does anybody have the same problem? Is it a known issue?
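    If this is DFS Replication (DFS-R), one way to put numbers on the backlog while it happens is dfsrdiag on the master; a sketch with placeholder replication group, folder and server names (not taken from the question):

        dfsrdiag backlog /rgname:"MyReplicationGroup" /rfname:"MyReplicatedFolder" /smem:MASTER /rmem:REMOTE1

    Comparing the reported backlog counts and the first listed files over time shows whether the same files keep re-entering the backlog or whether something really is touching new ones.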

    Read the article

  • Network Restructure Method for Double-NAT network

    - by Adrian
    Due to a series of poor network design decisions made (mostly) many years ago in order to save a few bucks here and there, I have a network that is decidedly sub-optimally architected. I'm looking for suggestions to improve this less-than-pleasant situation. We're a non-profit with a Linux-based IT department and a limited budget. (Note: none of the Windows equipment we have does anything that talks to the Internet, nor do we have any Windows admins on staff.)

    Key points:

    - We have a main office and about 12 remote sites that essentially double-NAT their subnets with physically segregated switches (no VLANing, and limited ability to do so with the current switches).
    - These locations have a "DMZ" subnet that is NAT'd on an identically assigned 10.0.0.0/24 subnet at each site. These subnets cannot talk to the DMZs at any other location, because we don't route them anywhere except between the server and the adjacent "firewall".
    - Some of these locations have multiple ISP connections (T1, cable, and/or DSL) that we route manually using IP tools in Linux.
    - These firewalls all run on the (10.0.0.0/24) network and are mostly "pro-sumer" grade firewalls (Linksys, Netgear, etc.) or ISP-provided DSL modems.
    - Connecting these firewalls (via simple unmanaged switches) are one or more servers that must be publicly accessible.
    - Connected to the main office's 10.0.0.0/24 subnet are servers for email, telecommuter VPN, the remote-office VPN server, and the primary router to the internal 192.168.x.0/24 subnets. These have to be accessed from specific ISP connections based on traffic type and connection source.
    - All our routing is done manually or with OpenVPN route statements.
    - Inter-office traffic goes through the OpenVPN service on the main "Router" server, which has its own NAT'ing involved.
    - Remote sites only have one server installed at each site and cannot afford multiple servers due to budget constraints. These servers are all LTSP servers, each serving 5-20 terminals.
    - The 192.168.2.0/24 and 192.168.3.0/24 subnets are mostly, but NOT entirely, on Cisco 2960 switches that can do VLANs. The remainder are D-Link DGS-1248 switches that I am not sure I trust well enough to use with VLANs. There is also some remaining internal concern about VLANs, since only the senior networking staff person understands how they work.
    - All regular internet traffic goes through the CentOS 5 router server, which in turn NATs the 192.168.x.0/24 subnets to the 10.0.0.0/24 subnets according to the manually configured routing rules that we use to point outbound traffic at the proper internet connection, based on '-host' routing statements.

    I want to simplify this and get All Of The Things ready for ESXi virtualization, including the public-facing services. Is there a no- or low-cost solution that would get rid of the double NAT and restore a little sanity to this mess, so that my future replacement doesn't hunt me down?

    Basic diagram for the main office: [not reproduced here]

    These are my goals:

    - Public-facing servers with interfaces on that middle 10.0.0.0/24 network are to be moved into the 192.168.2.0/24 subnet on ESXi servers.
    - Get rid of the double NAT and get our entire network onto one single subnet. My understanding is that this is something we'll need to do under IPv6 anyway, but I think this mess is standing in the way.
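    For the "point outbound traffic at the proper internet connection" part, source-based policy routing on the CentOS router can replace most of the per-host static routes; a rough sketch with placeholder gateways and table names (not the actual configuration from the question):

        # /etc/iproute2/rt_tables: add lines "100 isp_t1" and "101 isp_cable"
        ip route add default via 10.0.0.1 dev eth0 table isp_t1
        ip route add default via 10.0.0.2 dev eth0 table isp_cable
        # choose the uplink by source subnet instead of by destination host
        ip rule add from 192.168.2.0/24 table isp_t1
        ip rule add from 192.168.3.0/24 table isp_cable

    The same mechanism can key off fwmarks set by iptables if the selection needs to be by traffic type rather than by source subnet.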

    Read the article

  • Everyone can access my Windows 7 Homegroup file shares - Even Windows XP computers

    - by Adrian Grigore
    I have 3 computers in my network, two running Windows 7 and one running Windows XP. I've set up a homegroup on both Windows 7 computers. Also, all computers are in the same Workgroup. The problem is that one of the Windows 7 computers makes all shares accessible to the entire Workgroup instead of just sharing to the Homegroup as it should be. I created the file share in Windows 7 via right-click in the explorer, then click on "Share For" - "Homegroup (Read/Write)" (translated from German, so the actual wording may be different). Also, when I look at the file sharing properties of that folder, Windows Explorer informs me that Users must have a valid account and password for this Computer to access drive shares. Unfortunately this is not true. Being in the same Workgroup is enough to get access. Homegroup restrictions work as expected on my other Windows 7 computer. When trying to browse those shares from the XP computer, I get a dialog asking for a login and password. What might cause homegroup restrictions to fail and how can I fix this?

    Read the article

  • Have VIM jump to a ctag in an existing tab

    - by Adrian Petrescu
    I have ctags configured with my vim installation. My habit is usually to have all of the relevant files I'm working on open at once in tabs in vim. The "problem" is that if I use Ctrl+] to jump to a ctag in a file I'm editing, it will replace the buffer in that tab, even though I already have another tab open containing that symbol. It would be much better if it just switched to that tab and jumped to the symbol there instead. This way I would always have a 1-to-1 tab-to-file ratio. I noticed that the change notes for the taglist.vim plugin (which I also use) have an entry that says "Added support for jumping to a tag/file in a new or existing tab from the taglist window (works only with Vim7 and above)." However, I couldn't find anything in the documentation for taglist (or ctags) about how to actually do this. Can any vim gurus fill me in? Thanks!
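    Not a full answer to the "existing tab" part, but a common workaround (a sketch for ~/.vimrc, not taken from the taglist docs) is to remap the tag jump so it opens the tag in a tab instead of replacing the current buffer:

        " open the tag under the cursor in a new tab rather than in the current window
        nnoremap <C-]> :tab split<CR>:exec("tag " . expand("<cword>"))<CR>

    This always opens a new tab rather than reusing one that already shows the file, so it approximates, rather than exactly implements, the 1-to-1 tab-to-file workflow.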

    Read the article

  • Viability of Apache (MPM Worker), FastCGI PHP 4/5.2/5.3, and MySQL 5

    - by Adrian
    My server will be hosting numerous PHP web applications, ranging from Joomla and Drupal to some legacy (read: PHP 4) and other custom-built code inherited from clients. This will be a development machine used by a dozen or so web developers, and issues like fluctuating loads or particularly high load expectations are not important. Now, my question: are there any concerns I should know about when using Apache w/ MPM Worker, PHP 4/PHP 5.2/PHP 5.3 (all via FastCGI), and MySQL 5 (with a query cache of 64MB)? I have not tested the various applications extensively, and I have only recently learned how to install PHP and utilize it via FastCGI (rather than mod_php, which in this case seemed impossible, considering the multiple versions of PHP and the desire to use MPM Worker over MPM Prefork). I have come to understand that there could be concerns regarding XCache and APC, namely non-thread-safety issues where data becomes corrupted and the capability to use MPM Worker becomes null and void. Is this a valid concern? I have been using my personal testing server (running Ubuntu Server Edition 10.04 in VirtualBox), which has 2GB of RAM available to it. Here is the configuration used (the actual server will likely use a configuration more tailored to suit its purposes):

    Apache:

        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Apr 13 2010 20:22:19
        Server's Module Magic Number: 20051115:23
        Server loaded:  APR 1.3.8, APR-Util 1.3.9
        Compiled using: APR 1.3.8, APR-Util 1.3.9
        Architecture:   64-bit
        Server MPM:     Worker
          threaded:     yes (fixed thread count)
          forked:       yes (variable process count)

    Worker:

        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          400
            MaxRequestsPerChild 2000
        </IfModule>

    PHP ./configure (PHP 4.4.9, PHP 5.2.13, PHP 5.3.2):

        --enable-bcmath \
        --enable-calendar \
        --enable-exif \
        --enable-ftp \
        --enable-mbstring \
        --enable-pcntl \
        --enable-soap \
        --enable-sockets \
        --enable-sqlite-utf8 \
        --enable-wddx \
        --enable-zip \
        --enable-fastcgi \
        --with-zlib \
        --with-gettext \

    Apache php-fastcgi-setup.conf:

        FastCgiServer /var/www/cgi-bin/php-cgi-5.3.2
        FastCgiServer /var/www/cgi-bin/php-cgi-5.2.13
        FastCgiServer /var/www/cgi-bin/php-cgi-4.4.9
        ScriptAlias /cgi-bin-php/ /var/www/cgi-bin/
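    One detail that is easy to miss with this layout (a sketch, not part of the configuration above; the values are only illustrative) is that each FastCgiServer line can pass PHP's FastCGI process-management variables, which is how a dozen developers are kept from exhausting a single PHP pool:

        FastCgiServer /var/www/cgi-bin/php-cgi-5.3.2 -initial-env PHP_FCGI_CHILDREN=4 -initial-env PHP_FCGI_MAX_REQUESTS=500

    PHP_FCGI_MAX_REQUESTS also recycles long-running PHP FastCGI processes periodically, which works around slow memory leaks in legacy code.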

    Read the article

  • How can I keep track of SQL Server cumulative updates?

    - by Adrian Grigore
    Hi, if I am not mistaken, SQL Server cannot be updated automatically via the regular Windows Update routine. Instead, there are cumulative updates that need to be installed by hand. I assume this is done for security and stability reasons. Is this correct? If so, how can I keep track of new updates without regularly reading SQL Server related blogs? Is there any low-volume newsletter I can subscribe to (ideally one announcing only critical updates)?

    Read the article

  • Yahoo Messenger IP range

    - by Adrian
    I use PeerBlock (formerly PeerGuardian) and, as a consequence, Yahoo Messenger (actually Pidgin) fails to connect every once in a while; PeerBlock reports the access as blocked because the destination IP is in one of the block lists. Where can I get a list of all IP ranges belonging to Yahoo Messenger, so I can configure an "allow" rule in PeerBlock?
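    One way to build such a list (a sketch; AS10310 is my assumption for Yahoo's ASN, so verify it first with a plain whois on a known Messenger server IP) is to pull every prefix announced by that AS from a routing registry:

        whois -h whois.radb.net -- '-i origin AS10310'

    The output is a superset of what Messenger actually uses, but it is a workable starting point for an allow list.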

    Read the article

  • How do I get a file type to show up with a name I choose in Windows Explorer?

    - by Adrian
    I associated a file extension using the command assoc. But in Explorer, it lists the type using the extension name; i.e. assoc .sh=ShellScript will still cause Explorer to show the type as "SH File". Is there any way to change it so it shows up as ShellScript, or better yet, Shell Script? EDIT: Using assoc didn't work; there seems to be something wrong with my registry. I figured that using quotes would put in the white space, but because it didn't show up in Explorer, I figured that may have been part of the problem.
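    The Type column in Explorer is taken from the file type (ProgID) the extension points to, not from the assoc mapping itself; a sketch of setting a friendlier name from an elevated command prompt (ShellScript is the ProgID already used in the question, and the display string is just an example):

        assoc .sh=ShellScript
        reg add HKCR\ShellScript /ve /d "Shell Script" /f

    After restarting Explorer (or logging off and on), the Type column should pick up the new default value of the ProgID key.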

    Read the article

  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties.

    What we need:

    - HA
    - a scalable storage backend
    - offline/disconnected operation on the client, to account for network outages
    - cross-platform access: client-side access certainly from Windows (probably XP upwards), possibly from Linux
    - the backend integrates with AD/LDAP (permission management, user/group management, ...)
    - should work reasonably well over slow WAN links

    Another problem is that we don't really know all possible use cases here, i.e. whether people need concurrent access to shared files or will only be accessing their own files, so a possible solution needs to account for concurrent access and for what conflict management would look like from a user's point of view. This two-year-old blog post sums up the impression I have been getting during the last couple of days of research: there are lots of currently übercool projects implementing (non-Windows) clustered, petabyte-capable blob-storage solutions, but none that supports disconnected operation nicely and natively. I am hoping that we have missed an obvious solution.

    What we have tried:

    OpenAFS. We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week, but there are several problems with it:

    - it's a real pain to set up
    - there are no official RHEL/CentOS packages
    - the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, which is an absolute no-go
    - Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8; the current client for the 1.7 branch does, but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7.

    Suffice it to say, we couldn't get it working, and the whole setup has been so unstable and complicated to set up that it's just not an option for production.

    Samba + Unison. Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation, we went for a simpler idea that would sync files against a Samba server using Unison. This has the following advantages and drawbacks:

    - Samba integrates with ADs; it's a pain, but it can be done.
    - Samba solves the problem of remotely accessing the storage from Windows, but it introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need an HA Samba setup on top of it to maintain HA, which probably adds a lot of additional complexity. I vaguely remember trying to implement redundancy with Samba before, and I could not silently fail over between servers.
    - Even when online, you are working with local files, which will result in more conflicts than would be necessary if a local cache were only touched when disconnected.
    - It's not automatic. We cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way.
    - On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well, or even at all.

    Samba + "Offline Files". After that we became a little desperate and gave the Windows "Offline Files" feature a chance. We figured that having something built into the OS would reduce administrative effort, would help with blaming someone else when it's not working properly, and should just work, since people have been using it for years. Right? Wrong. We really wanted it to work, but it just doesn't. 30 minutes of copying files around and unplugging network cables/disabling network interfaces left us with (silently! there is only a tiny notification in the Windows Explorer status bar, which doesn't even open Sync Center if you click on it!) undeletable files on the server (!) and conflicts that should not even be conflicts. In the end, we had one successful sync of a tiny text file; everything else just exploded horribly. Beyond that, there are other problems:

    - Microsoft admits that "Offline Files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all, which would mean those files become unavailable if the connection drops.
    - In Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions.

    Summary: unless there is another fault-tolerant DFS that supports Windows natively, I assume that stacking an HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?

    Read the article

  • No keyboard works

    - by Adrian Mester
    I'm trying to troubleshoot a problem a friend is having. Her keyboard suddenly stopped working. She had a wireless keyboard/mouse combo and the mouse still worked. She tried with a different usb keyboard, and it still didn't work. She tried with a PS/2 keyboard and it didn't work. She's running Windows 7, but the keyboard also doesn't work during boot-up. I've got no idea what could possibly be the problem. Any suggestions?

    Read the article

  • Does sending e-mail in the name of customers increase the risk of being marked as a spammer?

    - by Adrian Grigore
    Hi, we are developing a SaaS web application that lets users send invoices to their clients. Ideally, these e-mails should appear to originate from our customers, so the sender e-mail address domain will not match the reverse IP entry for our server. In effect we would be forging their e-mail address, but of course with their consent. Will that result in a higher probability of being marked as a spammer / of their e-mails being marked as spam? If yes, how bad is the penalty? And what about people whose e-mail address originates from an SPF-enabled domain? I guess that should be the majority of the big e-mail providers.
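    If a customer controls their DNS, the usual mitigation is for them to list the application's outgoing mail server in their SPF record; an illustrative TXT record (the domain and IP are placeholders, not anything from the question):

        customerdomain.com.   IN   TXT   "v=spf1 mx ip4:203.0.113.25 ~all"

    Mail sent from the SaaS server without such an entry is exactly the case the question worries about for SPF-enabled sender domains.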

    Read the article

  • Roaming Profiles, Folder redirection or... both

    - by Adrian Perez
    Hello, I'm setting up Remote Desktop Services on Windows Server 2008 R2. For now it will be a single server, but in the future another server could be added to the farm. I'm setting up roaming profiles and folder redirection to save space, and I have some doubts: if I'm redirecting all the folders I can through GPO (Start Menu, Desktop, AppData, My Documents, Videos, Music, ...), does it still make sense to use roaming profiles? I mean, I'm redirecting almost everything. So, if I don't use roaming profiles, what kind of data is not shared/roamed? Perhaps they are not necessary, and by enabling roaming profiles I would just add unnecessary complexity to the infrastructure. What do you think? Any advice or recommendations? Thanks!

    Read the article

  • Hang while starting several daemons [solved]

    - by Adrian Lang
    I'm running a Debian Squeeze AMD64 server. The target runlevel after boot is runlevel 2, which includes rsyslogd, cron, sshd and some other stuff, but not dovecot, postfix, apache2, etc. The system fails to reach runlevel 2, with several symptoms:

    - The system hangs at trying to start rsyslogd.
    - Booting into runlevel 1 works, and login from the console then works.
    - Starting rsyslogd from runlevel 1 via /etc/init.d/rsyslog hangs.
    - Starting runlevel 2 with rsyslogd disabled works. But then logging in via the console fails: I get the motd, and then nothing.
    - Starting sshd from runlevel 1 succeeds. But then I cannot log in via ssh. Sometimes password ssh login gives me the motd and then nothing, sometimes not even that. Trying to offer a public key seems to annoy the sshd enough to not talk to me any further.
    - When rebooting from runlevel 1, the server hangs at trying to stop apache2 (which is not running, so this really should be trivial). Trying to stop apache2 when logged in in runlevel 1 hangs as well.

    And that's just the stuff that fails all the time. RAM has been tested, and dmesg shows no problems. I have no clue.

    Update: (shortened) output from rsyslogd -c4 -d, called in runlevel 1:

        rsyslogd 4.6.4 startup, compatibility mode 4, module path ''
        caller requested object 'net', not found (iRet -3003)
        Requested to load module 'lmnet'
        loading module '/user/lib/rsyslog/lmnet.so'
        module of type 2 being loaded
        conf.c requested ref for 'lmnet', refcount 1
        rsylog runtime initialized, version 4.6.4, current users 1
        syslogd.c requested ref for 'lmnet', refcount now 2

    I can kill rsyslogd with Ctrl+C then. /var/log shows none of the configured log files, though.

    Update 2: Thanks to @DerfK I still have no clue, but at least I narrowed down the problem. I'm now testing with /etc/init.d/apache2 stop (without an apache2 running, of course), which hangs as well and looks like an even more obvious failure. After some testing I found out that a file with the one single line

        /usr/sbin/apache2ctl configtest > /dev/null 2>&1

    hangs, while the same line executed in an interactive shell works. I was not able to reduce this line any further; i.e. every single part, the stream redirections and the command itself, is necessary to reproduce the hang. @DerfK also pointed me to strace, which gave a shallow hint about what kind of hang we have here:

        wait4(-1,                                            (for the init scripts)
        futex(0xsomepointer, FUTEX_WAIT_PRIVATE, 2, NULL     (for the rsyslogd / apache2 binaries called by the init scripts)

    The system was installed as Debian Lenny by my hoster in autumn 2011; I upgraded it to Squeeze immediately and kept it up to date with Squeeze, which then used to be testing. There were no big changes, though. I guess I never tried to reboot the system before.

    Update 3: I found the problem. My /etc/nsswitch.conf specified ldap as a fallback for hosts lookups, and ldap is not available at that point in the boot. Relying solely on dns fixes my boot problems.
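    For reference, the relevant change is a one-line edit; a sketch of what the hosts line looks like before and after (the exact original line isn't quoted in the question, only that ldap was listed as a fallback):

        # /etc/nsswitch.conf
        # before: LDAP is consulted for host lookups, but is not reachable that early in boot
        hosts:  files dns ldap
        # after
        hosts:  files dns

    Anything that does a host lookup during early boot (init scripts, rsyslogd resolving its own hostname) can block while the LDAP backend times out, which matches the wait4/futex hangs above.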

    Read the article

  • ESXi Guests will not boot on IBM x3550 M3

    - by Adrian
    I have a problem with guests not booting under VMware ESXi 5.0 on my IBM x3550 M3 server.

    VM host server: IBM x3550 M3 7944AC1 with 2x Intel Xeon E5607 2.27 GHz CPUs; ESXi 5.0.0 build 623860, built for IBM hardware, downloaded from IBM; storage: 2x 500GB SAS local disks; 8GB RAM. VT is verified to be ENABLED. Server health status: Normal.

    The ESXi host boots just fine. The client connects just fine. Guests can be configured but do not successfully boot. The initial guest memory consumption jumps up to 560MB and drops down to 40MB after a few seconds. Initial CPU usage is one full CPU (3000 MHz per the chart) and immediately drops down to 29 MHz. Guests do not display any output in the Console tab but show a state of 'Powered On'. The VMs are listed as version 7, and the behaviour is the same across all available guest OS flavours. The problem is also duplicated when the server is booted in Legacy Only mode. The logs do not contain anything particularly suspicious.

    Edit: No firewalls, routers, or VLANs between the client and the server.
    Edit 2: We tried the "boot guest into BIOS screen at next boot" checkbox in the guest settings; it did not help.
    Edit 3: 500GB datastore with one 40GB VM on it. Plenty of space.

    Read the article

  • Bash: create anonymous fifo

    - by Adrian Panasiuk
    We all know mkfifo and pipelines. The first creates a named pipe, so one has to pick a name (most likely with mktemp) and later remember to unlink it. The second creates an anonymous pipe with no hassle over names and removal, but the ends of the pipe get tied to the commands in the pipeline; it isn't really convenient to get hold of the file descriptors and use them in the rest of the script. In a compiled program I would just do ret=pipe(filedes); in Bash there is exec 5<>file, so one would expect something like "exec 5<> -" or "pipe <5 >6". Is there something like that in Bash?
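    There is no built-in anonymous pipe syntax that I know of, but a common workaround is to create a named fifo, open it on a descriptor, and unlink the name immediately; a minimal bash sketch (with the caveat that a single read/write descriptor is not quite the same as pipe()'s two ends):

        fifodir=$(mktemp -d) || exit 1
        mkfifo "$fifodir/pipe"
        exec 5<>"$fifodir/pipe"   # open read-write so the open itself does not block
        rm -rf "$fifodir"         # the name is gone, fd 5 still refers to the pipe
        echo hello >&5            # write into the pipe...
        read -r line <&5          # ...and read it back from the same fd
        echo "$line"

    Bash 4's coproc builtin is the other option when the pipe only needs to talk to a single helper process.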

    Read the article

  • Clonezilla is not able to clone a RAID 1 disk

    - by Adrian
    I have an HP DL320 G5 server with two SATA hard disks configured as RAID 1 through the HP embedded RAID controller. The server OS is GNU/Linux (Fedora). The server was booted with the Clonezilla live CD, and the image is to be stored on a NAS connected through NFS. Clonezilla could mount the NFS share and could see the two hard disks /dev/sda and /dev/sdb. I selected /dev/sda for disk cloning. However, I never saw any cloning progress and went straight to the prompt offering reboot, poweroff or command line. I tried selecting /dev/sdb instead, with the same result.

    Read the article

  • Tomcat memory usage

    - by Adrian Mester
    I'm running Tomcat on an Ubuntu 10.04 VPS with 512MB of RAM (1024 burstable). I'm using it for development, so performance isn't an issue, but memory is. Tomcat is currently using about 250MB without any apps installed (I compared memory usage with Tomcat stopped and running), and I also need to run lighttpd and MySQL. Is there any way to get that number down? I don't need it to handle a large number of requests at once.
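    The JVM's default heap sizing is usually the culprit; a sketch of capping it for a development instance (the values are illustrative, and the file to put this in, e.g. /etc/default/tomcat6 or CATALINA_HOME/bin/setenv.sh, depends on how Tomcat was installed):

        JAVA_OPTS="-Xms32m -Xmx128m -XX:MaxPermSize=64m"

    Resident memory will still be heap plus permgen plus thread stacks and JVM overhead, but this typically brings an idle Tomcat well under the figure quoted above.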

    Read the article

  • Varnish + Tomcat vs Apache + mod_jk + Tomcat

    - by Adrian Ber
    Does anyone have performance comparison data for putting either Varnish or Apache with mod_jk in front of Tomcat? I know that the AJP connector is supposed to be faster than HTTP, but I was thinking that Varnish, which is lighter and highly optimized, could perform better in combination. There is also the distinction between static resources (which I think will be served faster by Varnish than by Apache, even with mod_cache) and dynamic pages.
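    For anyone benchmarking this themselves, the two minimal front-end setups look roughly like the sketch below (host names, ports and worker names are placeholders). Varnish, in default.vcl:

        backend tomcat {
            .host = "127.0.0.1";
            .port = "8080";
        }

    Apache with mod_jk, in workers.properties plus a mount in the Apache configuration:

        worker.list=tomcat1
        worker.tomcat1.type=ajp13
        worker.tomcat1.host=127.0.0.1
        worker.tomcat1.port=8009
        # in httpd.conf / a vhost:  JkMount /* tomcat1

    The measured difference will depend heavily on the cache hit rate Varnish can achieve, which is why the static/dynamic split matters.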

    Read the article

  • VPN, routing, specified application

    - by Adrian
    Details: eth0 = current internet connection; pptp1 = the VPN connection. When I connect to my provider, they give me an IP address that is reachable from the internet, which is what I need: I want to connect back to my PC through this IP. I want to keep my primary internet connection (eth0) for all traffic on my PC, but route the traffic of a specified application (or port) over the VPN, so that the application/port is reachable at the IP given to me by the pptp provider. Difficult, but is it possible? If yes, how? The incoming port will always be 33340; the outgoing port can change, but is usually 33330.
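    On Linux this is usually done with policy routing; a rough sketch (the table name, the mark value and the assumption that the tunnel comes up as ppp0 with address $VPN_IP are all placeholders, only the ports come from the question):

        # /etc/iproute2/rt_tables: add a line "100 vpn"
        ip route add default dev ppp0 table vpn
        # replies to connections that arrived on the VPN address go back out the tunnel
        ip rule add from $VPN_IP table vpn
        # outgoing traffic from the application's source port also uses the tunnel
        iptables -t mangle -A OUTPUT -p tcp --sport 33330 -j MARK --set-mark 1
        ip rule add fwmark 1 table vpn

    The source-address rule covers incoming connections on port 33340; the fwmark pair covers traffic the application originates itself.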

    Read the article

  • Adding a GET parameter to URL causes 404 error

    - by Adrian Grigore
    I'm trying to install the SyntaxHighlighter Evolved plugin on my WordPress blog. I've uploaded and activated the plugin, but it did not work. I looked at the page source and found that the plugin style is loaded from the following URL:

    http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css?ver=2.0.320

    This causes a 404 error (page not found). The strange thing, though, is that when I remove the GET parameter, the CSS loads fine:

    http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css

    What could be causing this problem and how can I fix it? Unfortunately I don't know how to make WordPress drop the GET parameter when loading the stylesheet. EDIT: As I just found out, this happens only in Firefox (3.0.11). IE loads both URLs above just fine. Not that this is of any help, though, so any suggestions would be appreciated. SECOND EDIT: I tried this on my laptop and it works fine with Firefox 3.0.8. So this really seems to be a browser problem after all.

    Read the article

  • Windows 7 Printing Issue

    - by Adrian Godong
    I am using Windows 7 RTM x64. In Control Panel -> Devices and Printers I have three printers listed: Fax, XPS Writer, and a Lexmark.

    - I can print a test page through the printer properties with no problem.
    - I can print a text file from Notepad with no problem.
    - I can't print from Safari. When I press Ctrl+P it displays the Print dialog, I press OK and nothing happens.
    - I can't print from Adobe Reader. When I press Ctrl+P it complains that there is no printer installed.
    - I can't print from Office applications. When I press Ctrl+P, they crash immediately. Running Office Diagnostics does not help.
    - I can't print from IE8. When I press Ctrl+P it displays the Print dialog and complains that I have to select a printer from the list; with any of the three printers selected, the Print button stays disabled.

    Any help?

    Update (01/11/2009): The default printer is the Lexmark, and I'm testing on that one as well. I was about to reinstall Office (as it is the first application that showed the problem), but then I tried others; some behave similarly, but not identically (maybe because of different printing implementations). In those applications that are able to display the printer selection dialog, I tried the Lexmark and XPS printers. Neither printed anything (paper for the Lexmark, a file for XPS).

    Update (01/12/2009): It seems that my Windows installation is botched. A colleague has a similar hardware/software combination (the same workstation model and Windows 7 x64) and his can print perfectly fine. I tried adding the printer from his share, no joy. I can test print from the printer properties and I can print from Notepad, but not from any other application.

    Read the article

  • Xenserver boot error

    - by Adrian
    I'm trying to get XenServer 5.5 running on a spare computer here. Hardware specs: Intel Q6600, 4GB RAM, and a Gigabyte GA-P35-DS3R motherboard. XenServer itself installs fine onto a 150GB SATA HDD, however it fails to boot whatsoever, giving this garbled mess: http://img697.imageshack.us/img697/9918/biosi.jpg It's not frozen, because if I press Enter it just prints a different garble, and it also says "could not find kernel image". The strangest thing is that if I put that HDD in my desktop and assign it to a VMware desktop VM (under the ESX profile, no less) it boots perfectly... leading me to believe there are no problems with the install or the HDD itself. From what I can tell, the error seems to be occurring completely separately from XenServer, in the bootloader (extlinux?). If there were a motherboard compatibility issue I would think it would also have manifested during installation, and the fact that the problem only appears when booting into Xen makes me doubt this. Any ideas, guys? (I'm using Xen because it can do PCI passthrough without VT-d.)

    Read the article
