Search Results

Search found 13575 results on 543 pages for 'desktop virtualization'.


  • What are the incentives (if any) to use WinRT instead of .Net?

    - by Ark-kun
    Let's compare WinRT with .Net.

    .Net
    - .Net is the 13+ years evolution of COM. The three main parts of .Net are the execution environment, the standard libraries and the supported languages.
    - CLR is the native-code execution environment, based on COM.
    - The .Net Framework has a big set of standard libraries (implemented using managed and native code) that can be used from all .Net languages. There are .Net classes that allow using OS APIs. WPF or Silverlight provide a XAML-based UI framework.
    - .Net can be used with C++, C#, Javascript, Python, Ruby, VB, LISP, Scheme and many other languages. C++/.Net is a variation of the C++ language that allows interaction with .Net objects.
    - .Net supports inheritance, generics, operator and method overloading and many other features.
    - .Net allows creating apps that run on Windows (XP, 7, 8 Pro (Desktop and Metro), RT, CE, etc.), Mac OS, Linux (+ other *nix); iOS, Android, Windows Phone (7, 8); Internet Explorer, Chrome, Firefox; XBox 360, Playstation Suite; raw microprocessors.
    - There is support for creating games (2D/3D) using any managed language or C++.
    - Created by the Developer Division.

    WinRT
    - WinRT is based on COM. The three main parts of WinRT are the execution environment, the standard libraries and the supported languages.
    - WinRT has a native-code execution environment based on COM.
    - WinRT has a set of standard libraries that more or less can be used from the WinRT languages. There are WinRT classes that allow using OS APIs. An unnamed Silverlight clone provides a XAML-based UI framework.
    - WinRT can be used with C++, C#, Javascript and VB. C++/CX is a variation of the C++ language that allows interaction with WinRT objects.
    - Custom WinRT components don't support inheritance (classes must be sealed), generics, operator overloading and many other features.
    - WinRT allows creating apps that run on Windows 8 Pro and RT (Metro only) and on Windows Phone 8 (limited).
    - There is support for creating games (2D/3D) using C++ only.
    - Ordered by the Windows Team.

    I think that all the aspects except the last ones are very important for developers. On the other hand, it seems that the most important aspect for Microsoft is the last one. So, given the above comparison of conceptually identical technologies, what are the incentives (if any) to use WinRT instead of .Net?


  • Apache won't follow Symlink

    - by Marvin Dickhaus
    I have a LAMP server (Ubuntu 12.10) set up on my development machine, a T60 modified with an SSD. The server base is in /var/www. Apache has the following config:

        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews SymLinksIfOwnerMatch
            AllowOverride all
            Order allow,deny
            allow from all
        </Directory>

    I'm currently developing a SilverStripe CMS featured site. The folder for the server is /var/www/sfk/. The framework and all CMS-relevant features are in their respective folders. The only folder that needs to be modified is /var/www/sfk/mysite. Because of that I want to keep the mysite folder under my home directory and symlink it into the server folder. So here is what I've done:

        ln -s ~/sfk/mysite/ /var/www/sfk/
        sudo chgrp www-data /var/www/sfk/mysite -R

    ls tells me the following for /var/www/sfk (excerpt):

        drwxr-xr-x  3 marvin www-data 4096 Nov 16 16:53 assets
        drwxr-xr-x 12 marvin www-data 4096 Nov 16 16:53 cms
        drwxr-xr-x 29 marvin www-data 4096 Nov 16 16:53 framework
        -rw-r--r--  1 marvin www-data 2410 Nov 16 16:53 index.php
        lrwxrwxrwx  1 marvin www-data   24 Nov 20 17:45 mysite -> /home/marvin/sfk/mysite/
        -rw-rw-r--  1 marvin www-data  514 Nov 16 16:55 _ss_environment.php
        drwxr-xr-x  4 marvin www-data 4096 Nov 16 16:53 themes

    and ls /var/www/sfk/mysite/:

        drwxrwxr-x 6 marvin www-data 4096 Nov 16 00:15 code
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 11:51 _config
        -rwxrwxr-x 1 marvin www-data 2685 Nov 16 15:39 _config.php
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 css
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 images
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 javascript
        drwxrwxr-x 5 marvin www-data 4096 Nov 16 00:15 templates

    This is literally the same setup I have on my desktop machine. The problem is that the mysite/ folder is just not recognized. I'm thankful for any advice; I'm frustrated because I've been stuck on this issue for hours.
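    One quick way to check the usual culprits here - with SymLinksIfOwnerMatch the link and its target must have the same owner, and Apache needs execute permission on every directory leading into /home/marvin - is a sketch like this (paths taken from above, my suggestion rather than part of the original post):

        # show owner and mode of every component of the resolved path
        namei -l /var/www/sfk/mysite/_config.php

        # try to read the directory as the Apache user
        sudo -u www-data ls /var/www/sfk/mysite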


  • Problem with several USB devices on Windows XP. How to diagnose a USB device?

    - by Lukasz Baran
    Hello all, recently I've bought some electronic devices (e.g. a Sony PRS-600 ebook reader and an HP iPAQ PDA) which connect to the PC over USB. These devices have their own batteries, which are recharged over USB. However, I am experiencing a problem with the ebook reader, and a similar thing happens with my PDA (in its case more rarely). Sometimes I am unable to make it start recharging when it's connected to a USB port. The LED on my reader lights up and the icon in the system tray appears; both indicate that the device has been connected. However, the device should start recharging, and it should appear in Windows XP as an 'explorable' device (just like a pendrive, it exposes some drives). Neither of these happens :(

    This problem appears on two PCs - a desktop and a laptop, both with Windows XP SP3 installed. What's interesting and surprising to me is that these devices sometimes work without any problems: I just plug them in and they work properly. Besides that, there are some computers (I have tested a variety of possible conditions) on which the problem does not appear at all! I am confused and, on the other hand, determined to find out the real cause of this strange behavior. Maybe I'm wrong, but I always assumed I was using deterministic machines; the described situation makes me feel like I was dreaming :) And that's why I'm asking you here.

    Are there any tools that could help me diagnose USB ports? Or maybe anyone knows the solution? I have a lot of experience with low-level programming (mostly from the times of MS-DOS applications), but I'm not an expert when it comes to WinXP technology. If this was a problem with a network connection, I would know what to do - I know how to track network traffic - but with USB ports I feel powerless. I have a feeling that the problem may lie in the way WinXP handles USB devices. Personally, I hate the auto-discovery and P'n'P features available in Windows. I know some may suggest moving to Linux or another *nix system; unfortunately, I have to use WinXP in my professional life, so that's rather impossible. Besides, I don't consider XP a total disaster. Any help from you will be appreciated!


  • Suspected network performance issue on VirtualBox Ubuntu guest on Win7 host

    - by Adam
    I set up Ubuntu 12.04 in VirtualBox on the Win7 machine I was allocated on my new project. I am running Java, Eclipse and Tomcat to develop a large data-intensive application, and I noticed that this application runs at half the speed of my colleague's identical machine, where he runs it all under Windows. After comparing and equalising all the Java VM settings with my colleague, I think I have narrowed the performance issue down to the network. Is there a ping test I can do, or some other network diagnostic test, to flag up any problems?

    To give some background, the network performance is confusing. Running a network speed test to my colleague's machine with iperf shows speeds of 6 Mb/s from my Ubuntu guest, but 90 Mb/s from the Win7 host. Large downloads, e.g. the Java SDK, come down at about 1.2 MB/s on both the guest and the host. Pings are sub-1ms on the host, but 1.5ms on the guest. I also did a broadband speed test and got a 10 Mb/s download speed on both, but while the host uploads at 10 Mb/s, the guest only uploads at 3 Mb/s.

    I've been trying to diagnose any MTU problems with ping -M do to identify any kind of packet fragmentation problem, but it's progressing very slowly because I don't have much experience in this area. From what I read of other people's networking issues with VirtualBox and Linux guests on Win7 hosts, I should be able to get the speed on the guest up to the same level as the host. I installed a fresh VM with Ubuntu again to see if I'd foobar'd it somehow, but I'm getting the same readings with iperf on the virgin installation. My setup is:

        Adapter 1: Intel PRO/1000 MT Desktop (NAT)
        Adapter 2: ditto (host-only adapter)

        eth0      Link encap:Ethernet  HWaddr 08:00:27:0b:76:bf
                  inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::a00:27ff:fe0b:76bf/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:86236 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:49369 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:69163946 (69.1 MB)  TX bytes:3530535 (3.5 MB)

        eth2      Link encap:Ethernet  HWaddr 08:00:27:a3:26:b8
                  inet addr:192.168.56.101  Bcast:192.168.56.255  Mask:255.255.255.0
                  inet6 addr: fe80::a00:27ff:fea3:26b8/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:59 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:57 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:9148 (9.1 KB)  TX bytes:7648 (7.6 KB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:701 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:701 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:66321 (66.3 KB)  TX bytes:66321 (66.3 KB)
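    For the ping -M do approach mentioned above, a small sweep saves typing; a sketch (the target here is the VirtualBox default NAT gateway, 10.0.2.2 - substitute whatever host you are testing against):

        # 1472 bytes of ICMP payload + 28 bytes of headers = a full 1500-byte MTU
        for size in 1472 1464 1400 1300 1200; do
            if ping -M do -s "$size" -c 2 -q 10.0.2.2 > /dev/null 2>&1; then
                echo "payload $size: OK"
            else
                echo "payload $size: fragmented or dropped"
            fi
        done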


  • How to temporarily disable a mirror video driver in the Windows XP registry

    - by happy clicker
    Because it's a lot of text, I will first ask my question and then explain the underlying problem - perhaps someone can give me a solution to that instead: is there a way to temporarily disable mirror video drivers (through the registry or so) without uninstalling the corresponding software? I tested changing the enumeration in LocalMachine\Hardware\DeviceMap\Video, but after a reboot the old configuration is always restored.

    Explanation of the underlying problem: We are working on a WPF project for a department of a big company. There we have the problem that WPF renders only in software mode, although the hardware they have must support hardware rendering (Tier 2). After searching for a solution, we found out that Direct3D does not work properly, and we think that's why WPF can only use SW rendering. In dxdiag.exe the Direct3D acceleration is enabled, but if we start the test routine it always fails, saying that it has not enough memory (it says memory, not video memory!). I have seen three different types of PC there (they have some hundreds of each type) and every type shows exactly the same behavior. We tried to update all the drivers, including DirectX (version 9.0c), and we searched a lot on the web but could not find a solution. All the PCs have Intel Dual-Core processors or better; one type has an Intel GMA 9000 graphics card, the other two types have recent ATI and NVidia graphics cards with 256MB onboard memory. System memory is at least 2GB, and Windows is XP SP3. The PCs are from two different manufacturers. Because we see exactly the same behavior on every computer of these three very different types, we don't think this is a driver or a DirectX problem.

    What we've found in other newsgroups is that DirectX can be disturbed by mirror video drivers such as NetMeeting, VNC and other remote-desktop installations. In the registry, we see a lot of such mirror entries under LocalMachine\Hardware\DeviceMap\Video, and we also find the definitions in the CurrentControlSet\Control\Video section (these drivers are not shown in the hardware panel of the OS, however). We can get admin rights on one of these computers to test whether disabling these drivers helps, but we must not change the configuration in a way that leaves some software broken after the tests. I cannot uninstall any software, because I have neither the media, the licenses nor the know-how to reinstall those apps. The support of this company, however, will only start to work once I can tell them what the real problem is. That's why we are looking for a way to disable these mirror drivers (or a hint for solving the DirectX problem, if we are on a false trail).
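    For reference (my suggestion, not from the post): mirror drivers are registered as kernel services, so the usual way to park one for a boot is to set its service's Start value to 4 (SERVICE_DISABLED) under CurrentControlSet\Services and reboot; restoring the exported key undoes the change. A sketch - "MirrorDriverName" is a placeholder for whatever mirror entry actually appears on those machines:

        rem export the key first so the change can be undone
        reg export "HKLM\SYSTEM\CurrentControlSet\Services\MirrorDriverName" mirror-backup.reg

        rem 4 = SERVICE_DISABLED; takes effect on the next boot
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MirrorDriverName" /v Start /t REG_DWORD /d 4 /f

        rem to restore: reg import mirror-backup.reg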


  • Can this USB3 behaviour be anything other than a hardware failure?

    - by Jonas Wielicki
    While my motherboard is half a year old now (ASUS M5A99X EVO), I only recently made use of the USB3 ports (after purchasing a USB3 external harddrive). However, I am encountering issues. I am running Linux 3.6.7-4.fc16.x86_64. Initially the harddrive worked fine with USB3 (an amazing ~160MB/s), but I had problems after putting the harddrive to sleep manually after use (backup) with hdparm -Y. After some time, the device disappears from lsusb and I see the following in dmesg:

        [ 1924.091107] xhci_hcd 0000:05:00.0: xHCI host not responding to stop endpoint command.
        [ 1924.091114] xhci_hcd 0000:05:00.0: Assuming host is dying, halting host.
        [ 1924.091147] xhci_hcd 0000:05:00.0: HC died; cleaning up
        [ 1924.091233] usb 11-1: USB disconnect, device number 2
        [ 1924.091272] sd 6:0:0:0: Device offlined - not ready after error recovery

    Testing with my (USB3-capable) notebook, I could not immediately reproduce the behaviour. I put the drive to sleep with hdparm -Y and waited for about an hour, but it was still listed in lsusb and responded after a few seconds' delay when I tried again after the hour of waiting. On the desktop, the device would usually have vanished after an hour.

    Googling this issue, I came across hints that playing around with IOMMU settings and upgrading the BIOS might help. I upgraded the BIOS and tried both with and without IOMMU enabled, with similar results. Most disturbing is that one of the two USB 3.0 hubs sometimes also disappears from lsusb (or does not show up after boot at all). I've also heard that there are some hardware issues with ASUS USB3 ports. Applying mechanical force to the cable doesn't push the issue one way or the other. Also, udev seems to re-enumerate all devices if I plug the HDD into the USB 3.0 port without success (I notice this because my keyboard layout changes to the default, which I do not use normally). The drive is externally powered and the external power supply is plugged in (it also stays powered when unplugged from USB, although it will spin down then).

    So before I try to return the board, I wanted to find out: can this be anything other than a failure on the motherboard?
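    One software-side factor worth ruling out first (my suggestion, not from the post): the kernel's USB runtime power management can autosuspend an idle controller or device, and a controller that mishandles the resume shows exactly this "host died" pattern. A sketch that forces everything to stay powered for the current session:

        # disable autosuspend for every USB device currently present
        for f in /sys/bus/usb/devices/*/power/control; do
            echo on | sudo tee "$f" > /dev/null
        done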


  • How can I create an appointment on a shared Outlook calendar

    - by roryhewitt
    This isn't as basic a question as it may seem. Hence all the descriptive text... I and my team use Outlook 2007. I have my own personal calendar, and I also share a calendar with others in my team (which is mainly used to notify everyone of vacations etc.). The others in my team are NOT technical people.

    I would like to create a shortcut or template that any of us can use to create an appointment in the shared calendar. My initial thought was to create a new appointment, but rather than actually putting it in the calendar, save it as an Outlook Template (.oft) file. Once created, I would send this to my team and tell them to put it in their Templates folder and put a shortcut on their desktop. Then, if they want to put a vacation in the shared calendar, they just double-click the shortcut, change the dates etc. and then save & close it. However, when I do that, it doesn't save the fact that it's an appointment on the shared calendar - it just adds the appointment to the team member's personal calendar. There doesn't seem to be a way to specify a calendar in the template. I've also tried this by saving the template as a .ics or .vcs file, with no better luck.

    Additionally, if a team member adds an appointment to the shared calendar, the other 'sharees' aren't notified unless the appointment is actually created as a meeting and the other sharees are explicitly invited. I found this online (http://office.microsoft.com/en-us/outlook-help/keep-everyone-informed-about-time-away-from-the-office-HA010209819.aspx), which APPEARS to say that what I want to do isn't built-in functionality (since it shows a bunch of steps to go through). I'd PREFER not to have to add this stuff to everyone else's personal calendar directly.

    So...
    - Is this possible to do natively (i.e. directly in Outlook)?
    - Would a SharePoint calendar make more sense and allow this functionality?
    - Is there a way to do what I want which will allow the other team members to be notified?

    Like I said, I'm looking for as simple an interface as possible - these people aren't going to want to do much more than open something and change dates. Additionally, they're probably not going to have any fancy software on their PCs, although they will be up to date with Java and (maybe) .NET frameworks. Also, before anyone gets funny, yes, this has to work with Outlook 2007, as it's a corporate standard - we're not able to change that, even though e.g. Google Calendar would do this wonderfully. Obviously if this functionality is available in Outlook 2010, then fantastic - we might be able to upgrade. Thanks!


  • I have been told to accept one error with Memtest86+

    - by DustByte
    Bought a new computer back in August with 4x4 GB RAM. Had problems with the RAM; they sent me four new sticks, which also generated errors. Singled out four sticks (from the eight I now had) that didn't generate any errors. Discovered by coincidence a new RAM error last week (this time no BSOD). Contacted the company. According to them there have been issues with a bad batch from last summer, so I got two tested 8 GB sticks sent to me.

    I've been running Memtest86+ over the weekend. After 20 hours I got an error (see attached photo). The test has now been running for 37 hours, but so far only this one error. I contacted the company where I bought the computer. They wrote back:

        I wouldn't worry about that one fail. We have had similar situations here whereby it passes numerous times but then fails once. We think it's an issue with memtest, after all memory is faulty or it isn't so you can't really have it pass a few times, fail the next time around and then pass again! Please trust me on this and continue with the memory we sent you and if your problems continue we'll look at getting it replaced again.

    I gather from other forum posts that many people do not accept a single error. What could this single error signify - faulty RAM, a glitch in the Memtest program, or something else?

    Update: From the helpful comments below I conclude that an occasional (and rare) "random" error could occur and be acceptable, but repeated errors at the same address would indicate malfunction. Memtest has now run for 45 hours and I still have only one error. For everyone's information, I will keep running the test. In less than two days I am going away for a month, and I will most likely leave Memtest running. As I do not have a UPS, there is a risk that a power outage will ruin the experiment. The computer is a desktop, so I cannot bring it with me (which would curiously have exposed it to more cosmic rays, as I will be flying ;)).


  • Melting Laptop Power Supply Tip

    - by AlReece45
    Several (6-7) months ago, my laptop power supply cord got a cut in it and stopped working. Having gotten cheap (and short) power supplies in the past, I decided to buy two brand-new ones from the manufacturer (ASUS). I used my laptop a little less than usual between February and March. During that time I noticed a few times that the power supply, even though plugged in, did not provide power; often the computer would just shut off on me. I figured it was just that one power supply being bad. I had left the alternate at my parents' house in another state and asked them to ship it to me.

    At work the other day I wanted to get a file off the hard disk. So I booted it up, knowing that it had a low battery, and plugged it in. During the first 2 minutes of use, I was told that the battery was low and I should plug it in. I unplugged it, inspected the end (being told to plug it in while it was already plugged in was suspicious), and decided I shouldn't plug it back in - the plastic on the tip was melting from the heat of the metal on the tip. The computer had simply booted up and I had the file manager open; it had not been on for more than 10 hours.

    Now I know that computers tend to get pretty hot, but the melting point of plastic is usually above 200°C, so that's much hotter than the computer should be generating. I went and bought a THIRD power supply, this time a universal one from Best Buy (it was very fast to buy and test). I tried it out on the computer and its tip is melting as well. My older laptop that uses the universal power supply uses it perfectly (it has been in use for about a week now). I have tried using the computer without the battery, with the same effect. Obviously, this is not a problem with the power supply.

    My roommate and I, being trained computer techs, are contemplating taking the computer apart and desoldering and resoldering the power tip. (The computer is about 6 months out of its 2-year warranty.) We're hoping that will correct the issue, as I would prefer to devote my money to a good desktop rather than yet ANOTHER $1200+ laptop. Is there anything I'm missing here that might cause the tip on the power unit to melt?


  • Be your own cloud [closed]

    - by Jedi
    I have reasonably many electronic gadgets that can go on LAN or Wi-Fi. But how do you share and/or synchronize all your files among them? Between my laptops and my desktop I use Dropbox - a nice way to share files among computers. But what if the HDD in your laptop is not large enough to carry music, pictures and films? Normally you would buy an external USB HDD and store them there, but then you cannot reach the files from other computers which are not connected to the USB device. Many would say I should use something like a cloud with a disk station. My needs are as follows:

    - Mass storage which can be reached from all devices (laptops, desktops, iPhone, Android phone, Xbox or Playstation)
    - Low power requirements and silent operation
    - Reachable inside the home, and it would be nice if it could be reached from outside as well
    - Cheap

    I have looked around and found a wireless router which can share a USB device: the D-Link Wireless N HD Media Router. I thought it would be an interesting option for a simple local cloud. D-Link uses a little program called SharePort Plus which mounts the USB device on your computer. Unfortunately the transfer rate to the USB storage device is rather disappointing: 5.8 Mbps, even though the distance between the laptop and the router was 2 meters. The same happens when I use a cable from the computer to the router. Another thing is that SharePort Plus only allows one computer to be connected to the device at a time - though that last point I could live with.

    I have searched the Internet for other solutions and found a video from Synology, but I'm not sure if their solution is the right one. I think a disk station connected to my home LAN could be the right solution. What have you done in your home to store and share files among your computers and game consoles?


  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I set up a Rails development environment. I am using Parallels Tools to share my OS X home directory with the VM - the goal is to run the Rails server on the VM but host the files on OS X (so they are automatically backed up, and so I can use tools like Textmate to develop with). Everything seems to work with the shared directory - my Debian user can read, write, and execute files. However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in this question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue

    I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc...) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work - until I make a change to the CSS files and it has to recompile.

    Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why the good permissions kick in after a few refreshes. It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it works fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice... any ideas?

    Update: I've found that the number of times I need to refresh before it works is not random - it has to do with how many assets are being compiled. For example, if I edit one of my CSS files and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app will finally work without the "operation not permitted" error.
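    A workaround that often sidesteps this class of shared-folder permission quirks (my suggestion, not from the post): leave the project on the share, but relocate the Rails tmp directory onto the VM's native filesystem, where mode bits behave normally. A sketch, with the target path an arbitrary choice:

        # from the Rails application root inside the VM
        mkdir -p /var/tmp/railsapp-tmp
        mv tmp tmp.on-share            # keep the old directory around, just in case
        ln -s /var/tmp/railsapp-tmp tmp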


  • Windows 7 reboot and freezing, possible power problems?

    - by mikelbring
    My Gateway LX Series desktop is about 6-8 months old. When I bought it, it had Windows Vista; I then put the RC version of Windows 7 on it. About 3 months after I bought it, it started to randomly reboot - actually, just shut off. I monitored the temperature levels and they seemed normal. So I installed a fresh Windows 7 Ultimate OEM 64-bit. It actually got worse and would reboot more frequently. I then contacted Gateway; they said my machine was built for Windows Vista (made me chuckle) and told me to update my BIOS. So I did, and it was fixed for a good couple of months.

    Recently, it started to do it again. I noticed early on that it happened most often, if not every time, when I was watching a Flash video or playing a Flash game. So I downloaded the drivers again, and I also downloaded my motherboard drivers. Things seemed to be okay. A week later it started doing it again, and now it's doing it even more frequently. Sometimes I would turn it on, log into Windows and *BAM!* it would shut off. Now I am at the point where I can hardly get it to turn on. It freezes at the point where it says "Starting Windows", with the Windows logo. Sometimes it says "Checking disk for consistency" or whatever and freezes there (not shut off, just frozen). I even got the prompt to launch Startup Repair, but that also hangs when it says it is starting Windows - it does not really freeze, it just never loads up.

    I am kind of lost as to what's going on. I have a few ideas but nothing I want to pursue (graphics card? hard drive?). Another thing I did try was to boot into a live disk of Ubuntu, launch every program I could and get on the internet, but I never got it to reboot. So it sounds to me like it's a Windows thing, but I have no idea. I am just stuck and would like to see if anyone has any ideas or could lead me in the right direction.


  • If Nvidia Shield can stream a game via wifi, why can I not do the same via ethernet to any other PC?

    - by Enigma
    I think it's absurd that a wireless game-streaming solution is the first* to hit the market, when a 1000mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. I can only assume that there must be some reason or limitation behind this, but what? 150mbps wifi is in no way superior to a 1000mbps LAN connection, aside from, well, wireless mobility. Not only that, but I have a secondary laptop and desktop which, by hardware comparison, should completely outperform anything the Tegra in the Nvidia Shield can do. Is this all just a marketing scheme to force people to buy the Shield for the streaming benefit?

        Chief among these is that NVIDIA's Shield handheld game console will be getting a microconsole-like mode, dubbed "Shield Console Mode", that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn't have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality.
        - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode

    This is not acceptable for me for a number of reasons, not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong a solution it is. They would need a second product without the screen or controller for this to make sense... at which point you're just buying a little computer that does what most other, larger computers do better.

    All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback, so by any logical comparison one (Nvidia especially) should have no difficulty in creating an application for PCs (win32/64 environment) that does the exact same thing their Android app does. I have two video cards capable of streaming (encoding) H.264, so by rights they must be capable of decoding it, I would think. I haven't found anything stating plans to allow non-Shield owners to do this. Can a third party create this software, or does it hinge on some limitation that only Nvidia can overcome?

    * - perhaps this isn't the first, but afaik it is the first complete package.


  • What is the difference between running a Windows service vs. running through shell?

    - by Zack
    I am trying to troubleshoot an issue on a Windows 2008 server where attempting to connect to a "Timberline Data Source" ODBC driver crashes if the call is made in a "service" context, but succeeds if the call is initiated manually in a Remote Desktop session. I have set the service to run as my user. I'm wondering: all else being equal (user, machine, etc.), are there any fundamental security/environment differences between running a process as a service vs. manually?

    --- Implementation Details ---

    In case it is helpful for anyone: I had a system that started as an attempt to connect to a Timberline database using ODBC and a Python CGI script called via IIS 7. The script itself works fine; however, as soon as I attempt to perform the ODBC connect function, the script crashes without throwing an exception. The script was able to connect fine when executed via the command line. The same thing happened when using a C#/.NET service, or when attempting to run via Apache, Windows Scheduler or even a 3rd-party scheduling tool. With the last option (the 3rd-party scheduling tool, pycron) I set the service up to log in as my user and had the same issue (I confirmed via Task Manager that the process was, in fact, running as me). It just doesn't make sense to me why a service, which should be running as my user, appears to still be operating in a different security context or environment.

    Also, if it's important: the Timberline database is referenced by computer name on the network ("\\timberline-server\Timberline Office\Accounts\AT" or something to that effect). I also realized that, as Joel pointed out, the server DOES have a mapped drive ("Y:", which is mapped to "\\timberline-server\Timberline Office"). The DSN is set up at the "System DSN" level which, according to the ODBC Administration Tool, means that the DSN is available to users and services.

    Since I'm not allowed to answer this question yet, I'll post the solution that I arrived at: as Joel Coel mentioned, there actually was a mapped-drive scenario. I didn't realize this because the DSN specified a path using UNC; however, it seems that the actual Timberline driver referred to a mapped drive. Since services don't start with the mapped drive, I was forced to add the drive-mapping code into my service. Since it was written in Python, I used code from a Stack Overflow answer that was able to map the drive on the fly.
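    The Stack Overflow code isn't quoted in the post; a minimal sketch of the same idea (mine, with the share path taken from above and everything else an assumption) is to have the service map the drive before any ODBC work:

        import subprocess

        def ensure_drive_mapped():
            # Services don't inherit per-user drive mappings, so create Y: here.
            # /persistent:no keeps the mapping scoped to this logon session.
            subprocess.check_call([
                "net", "use", "Y:",
                r"\\timberline-server\Timberline Office",
                "/persistent:no",
            ])

        ensure_drive_mapped()
        # ... continue with the ODBC connection once Y: exists ...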


  • Troubleshooting Network Speeds -- The Age Old Inquiry

    - by John K
    I'm looking for help with what I'm sure is an age-old question. I've found myself in a situation of yearning to understand network throughput more clearly, but I can't seem to find information that makes it "click". We have a few servers distributed geographically, running various versions of Windows. Assuming we always use one host (a desktop) as the source, when copying data from that host to other servers across the country we see a high variance in speed. In some cases we can copy data at 12 MB/s consistently; in others we're seeing 0.8 MB/s. It should be noted that, after testing 8 destinations, we always seem to be at either 0.6-0.8 MB/s or 11-12 MB/s. In the building we're primarily concerned with, we have an OC-3 connection to our ISP. I know there are a lot of variables at play, but I guess I was hoping the experts here could help answer a few basic questions to help bolster my understanding.

    1.) For older machines running Windows XP, Server 2003, etc., with a 100Mbps Ethernet card and 72 ms typical latency, does 0.8 MB/s sound at all reasonable? Or do you think that is slow enough to indicate a problem?

    2.) The classic "mathematical fastest speed" of "throughput = TCP window / latency" is, in our case, calculated to 0.8 MB/s (64 KB / 72 ms). My understanding is that this is an upper bound that you would never expect to reach (due to overhead), let alone surpass. In some cases, though, we're seeing speeds of 12.3 MB/s. There are Steelhead accelerators scattered around the network; could those account for such a higher transfer rate?

    3.) It's been suggested that the use of SMB vs. SMB2 could explain the differences in speed. Indeed, as expected, packet captures show both being used, depending on the OS versions in play. I understand what determines whether SMB2 is used, but I'm curious to know what kind of performance gain you can expect with SMB2.

    My problem simply seems to be a lack of experience and, more importantly, perspective in terms of what are and are not reasonable network speeds. Could anyone help impart some context/perspective?
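    For reference on question 2 (my arithmetic, not from the post): the formula is the bandwidth-delay product, and running it in both directions shows why both clusters of speeds are plausible:

        throughput <= TCP window / RTT

        64 KB / 0.072 s      ~= 0.87 MB/s   (classic 64 KB window: the 0.6-0.8 MB/s cluster)
        12.3 MB/s * 0.072 s  ~= 0.9 MB      (window needed for the fast cluster: only plausible
                                             with TCP window scaling, or with the Steelheads
                                             terminating the TCP session near each endpoint)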


  • Display is slightly blurry on (native) 1920x1080 resolution

    - by Martin Tuskevicius
    I have a computer monitor that is approximately 23" in size. Its native resolution is 1920x1080, and Windows 7 will not allow it to be any higher. However, I cannot make the resolution a little lower as well. When I right-click on my desktop and select 'Screen resolution,' the vertical slider has only two options: 1920x1080 and 1280x720.

    There are no real problems that I am having besides the fact that the image is slightly blurry. I can easily make things out and see them, but I definitely feel that the image is not as clear as it could be. My graphics card is an ATI Radeon HD 5450 and it has the latest graphics drivers installed. I've tried playing around with the AMD VISION Engine Control Center to see if I can change an option to make the image clearer, but I had no luck.

    I did find one odd thing, though. When I lowered the refresh rate from 60Hz to 50Hz, the image kind of "zoomed in" but it also became perfectly clear, like I would expect it to look. The problem is that when I use 50Hz, the image zooms in a little on the center and I lose maybe an inch and a half of the screen (I do not see the bar at the top of applications, I do not see the Windows taskbar, etc). I figured if I could somehow zoom in so that the entire image fills the screen (not the slightly cropped version) then I would have the perfectly crisp image of 50Hz, and also the uncropped image of 60Hz. However, upon zooming in, the image began to look blurry again, just like it did with 60Hz.

    So I am at a loss here. I do not know how to make the image look as clear as it should. I have the latest drivers (I updated them today) and I know that my monitor supports the resolution that I am trying to use. Has anybody experienced something like this before? I'd really appreciate any input - thanks!

    Update: I have figured out how to make the display look crisp! I set it to the 50Hz option, and then I changed the scaling through the monitor itself, rather than software. Now, however, I am finding that games look pretty bad because, since the image is clear, the lower quality really becomes apparent. I cannot run new games at 1080p, so I run them at the lowest resolution possible (1280x720, since it is the only other option offered, as I have mentioned). So I am wondering: is there a way to have Windows display more resolution options?


  • Slow WLAN file transfer between server and tablet

    - by user266985
    My file server is running Ubuntu 12.04 and I'm sharing files from it over Samba. It is connected via gigabit Ethernet. My desktop, running Windows 8.1, is also connected via gigabit Ethernet. I can transfer files between the two and completely saturate that gigabit pipe. However, I just got a Surface Pro 2, and I'm trying to stream HD movies from my server to the device over WiFi. For some reason, I can't get much past 1.5MB/s transferring files over the network. I've tried streaming through XBMC and a standard file copy; no difference. To add to the confusion, if I connect to my guest network and then use my VPN server (installed on the router) to access the file server, I get around 3.2MB/s. I've been running diagnostics to determine the root cause, and I think I've found it, but I have no idea what is causing it or how to fix it.

        Router: Asus RT-N66U
        Surface Pro 2 network card: Marvell Avastar 350N (driver 19/09/2013 v14.69.24044.150)
        inSSIDer: Link Score 100, Co-Channels 0, Overlapping 0
        5GHz network channel: 48+44

    iperf, file server as server, Surface Pro 2 as client - TCP performance acceptable:

        ------------------------------------------------------------
        Server listening on TCP port 5001
        TCP window size: 85.3 KByte (default)
        ------------------------------------------------------------
        [  4] local 192.168.0.90 port 5001 connected with 192.168.0.56 port 57367
        [ ID] Interval       Transfer     Bandwidth
        [  4]  0.0- 1.0 sec  10.1 MBytes  84.7 Mbits/sec
        [  4]  1.0- 2.0 sec  10.4 MBytes  87.6 Mbits/sec
        [  4]  2.0- 3.0 sec  10.6 MBytes  88.8 Mbits/sec
        [  4]  3.0- 4.0 sec  10.7 MBytes  89.5 Mbits/sec
        [  4]  4.0- 5.0 sec  10.1 MBytes  84.4 Mbits/sec
        [  4]  5.0- 6.0 sec  10.2 MBytes  85.8 Mbits/sec
        [  4]  6.0- 7.0 sec  7.04 MBytes  59.1 Mbits/sec
        [  4]  7.0- 8.0 sec  10.8 MBytes  90.2 Mbits/sec
        [  4]  8.0- 9.0 sec  10.6 MBytes  89.1 Mbits/sec
        [  4]  9.0-10.0 sec  8.62 MBytes  72.3 Mbits/sec
        [  4]  0.0-10.0 sec  99.2 MBytes  83.1 Mbits/sec

    iperf, Surface Pro 2 as server, file server as client - performance poor:

        ------------------------------------------------------------
        Client connecting to 192.168.0.56, TCP port 5001
        TCP window size: 22.9 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.0.90 port 40233 connected with 192.168.0.56 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0- 1.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  1.0- 2.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  2.0- 3.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  3.0- 4.0 sec  1.25 MBytes  10.5 Mbits/sec
        [  3]  4.0- 5.0 sec  1.62 MBytes  13.6 Mbits/sec
        [  3]  5.0- 6.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  6.0- 7.0 sec  1.38 MBytes  11.5 Mbits/sec
        [  3]  7.0- 8.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  8.0- 9.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  9.0-10.0 sec  1.62 MBytes  13.6 Mbits/sec
        [  3]  0.0-10.1 sec  15.0 MBytes  12.4 Mbits/sec

    For some reason it gets capped, and I haven't got a clue why. Any suggestions?

    Edit: My link speed is reported as 270Mbps by Windows. I'm less than two metres from the router with a clear line of sight.
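    One more measurement would narrow this down (my suggestion, not from the post): the slow direction is traffic toward the Surface, while the reverse direction is fine, so a UDP test that takes TCP windowing out of the picture would show whether the radio link itself caps near 12 Mbit/s on receive. A sketch using the same hosts as above:

        # on the Surface (the receiver under test)
        iperf -s -u -i 1

        # on the file server: push 50 Mbit/s of UDP toward the Surface
        iperf -c 192.168.0.56 -u -b 50M -i 1 -t 10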


  • How to prevent dual booted OSes from damaging each other?

    - by user1252434
    For better compatibility and performance in games I'm thinking about installing Windows in addition to Linux. I have security concerns about this, though. Note: "Windows" in the remaining text includes not only the OS but also any software running on it, regardless of whether it comes included or is additionally installed, and whether it is started intentionally or unintentionally (virus, malware). Is there an easy way to achieve the following requirements:

    - Windows MUST NOT be able to kill my Linux partition or my data disk - neither single files (virus infection) nor by overwriting the whole disk
    - Windows MUST NOT be able to read the data disk (extra protection against spyware)
    - Linux may or may not have access to the Windows partition
    - both Linux and Windows should have full access to the graphics card
      - this rules out desktop VM solutions for gaming
      - I want the manufacturer's Windows graphics card driver

    Regarding Windows being unable to destroy my Linux install: this is not just the usual paranoia; it has happened to me in the past. So I don't accept "no ext4 driver" as an argument. Once bitten, twice shy. And even if destruction targeted at specific (Linux) files is nearly impossible, there should be no way to shred the whole partition. I may accept the risk of malware breaking out of a barrier (e.g. a VM) around the whole Windows box, though.

    Currently I have a system disk (SSD) and a data disk (HDD), both SATA. I expect I'll have to add another disk; if I don't, even better. My CPU is an Intel Core i5 with VT-x and VT-d available, though untested. Ideas I've had so far:

    - deactivate or hide the other HDs until reboot, at a low level
      - possible? can the boot loader (GRUB) do this for me?
    - tiny VM layer: load Windows in a VM that provides access to almost all hardware, except the HDs
      - any ready-made software solution for this? Preferably free.
      - as I said, the main problem seems to be providing full access to the graphics card
    - hardware switch to cut power to the disks
      - commercial products are expensive, and there are lots of warnings against cheap home-built solutions
      - preferably all three hard disks with one switch (one push)
      - mobile racks - won't the wear of daily swapping be a problem?


  • My USB bootable thumb drive no longer boots on one particular computer on which it previously worked

    - by LiamMeron
    I created a bootable drive, booting CrunchBang, about 2 months ago. About a month ago, I booted into it on another laptop. Since shutting down after that, I have been unable to get it to boot on my own laptop, despite it having worked there previously. I can still boot into it on my desktop, and on all other computers that I have tried. If I plug it in while running Ubuntu, the home and / partitions mount; the only error is that for some reason my PC likes to try to mount the swap partition too, which naturally gives an error. The BIOS settings are all still set up to boot from USB. When I boot, all I get is a black screen with the white cursor, and it will stay there for as long as I leave it. If it is worth anything, I have the GRUB loader installed on the drive.

    The partitions look a little odd, but I am rather unfamiliar with how they are dealt with:

    - The first partition, /, is sdb1 and has the bootable flag.
    - The second partition, sdb2, is an extended partition.
    - The third partition, which according to GParted is nested under the extended one, is the swap partition, sdb5.
    - The fourth partition, also nested under the extended partition, is my home partition, sdb6.

    The first and fourth partitions are ext4. I don't know if that helps, but the more info, the better accuracy, generally. Thanks.

    EDIT: I tried reinstalling GRUB on the drive, but that didn't work. However, when I reinstalled GRUB on my laptop, I did it with the USB thumbdrive plugged in. This caused the GRUB updater to find the /boot folder and add the proper entries to my laptop's GRUB menu. I could boot into CrunchBang from my laptop's GRUB, but I was still unable to boot directly from the drive. It looks like my BIOS is unable to find the bootloader. I am unable to install GRUB to a partition I just created (a /boot partition at the start of the drive); GRUB just doesn't allow it. I think I'm going to have to reinstall #! on my drive, which won't be a great loss as I haven't put much time into it.
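    For reference (not from the post), reinstalling GRUB into the stick's own MBR from any machine that can still see it looks roughly like this; the device names are assumptions, so check them with lsblk first:

        # assuming the stick is /dev/sdb and its root partition is /dev/sdb1
        sudo mount /dev/sdb1 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sdb
        sudo umount /mnt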


  • Ubuntu + SSL ports + AVAST

    - by jurajvt
    I have an interesting problem with communication via the standard SSL ports. Freshly installed Ubuntu 14.04 server + Postfix + Dovecot, SASL authentication provided by Dovecot, self-signed certificate generated through the Dovecot script mkcert.sh. Ports are forwarded on a ZyWALL USG 200. I can send and receive e-mails from outside with the standard ports 25 and 110, but not over 587.

    I am connecting to my server from a machine with Windows 8.1 + VMware Player + Ubuntu 14.04 Desktop + ssh. On the Windows host I have the Avast! antivirus installed. When I try to telnet from the virtual machine to the server over 587, the connection is refused - but when I turn Avast! on, it lets me get as far as the "Connected to..." message. Same with nmap: when Avast! is turned on, it shows me all the SSL ports; when I turn it off, only the standard ports appear. OpenSSL shows me CONNECTED(00000003). But outside the virtual machine, directly on Windows 8.1, nmap (with zenmap) shows no open SSL ports in either Avast! state. From other external Linux machines the SSL ports are likewise refused.

    I have turned on submission in master.cf, and port 587 is correctly listening on 0.0.0.0 in the master process, which belongs to Postfix. I can telnet or nmap over port 587 to my domain directly from the server itself. Other ports like 995 and 993 are OK on localhost, too. It is true that I can't send emails via 587 in any case (Avast! turned on or off), but I can see the ports as open. Is it possible that I simply have a bad certificate and Avast! has a good one, so that with Avast! turned on I can see open ports?

    EDIT: To be more clear: I can't see or use port 587 from anywhere outside (tried Thunderbird, telnet, openssl, nmap, putty, swaks; both from Linux and Windows machines), and that is my problem. It was only by chance that I saw the open ports when Avast! was turned on.
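    Two checks that separate "Postfix not reachable from outside" from "a local proxy faking the connection" (my suggestion, not from the post; the domain name is a placeholder):

        # on the server: confirm the submission port is bound and by which process
        sudo netstat -tlnp | grep :587

        # from an external machine: negotiate STARTTLS directly, bypassing any local AV proxy
        openssl s_client -connect mail.example.com:587 -starttls smtp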


  • Computer randomly reboots during "intensive activity".

    - by Reznor
    My friend has been playing games on his new build for some time now. However, lately his computer will randomly reboot out of nowhere. So far it has happened only in game, whether during play or even in the options menus. Note, it isn't a crash or blue screen - it's just a normal reboot. This started today, I believe, and has only occurred in two games: Dead Space and Stalker: Shadow of Chernobyl. He played a handful of games before these, for about a week or so, without this problem.

    We theorized on two possibilities: maybe something is overheating, or maybe the power supply is inadequate? Both were quickly dismissed. All his components were operating at normal temperatures when he got back to his desktop after the reboot, and we all know these parts don't exactly cool down quickly, especially if they get hot enough to trigger a reboot. Besides, I know at least my motherboard reports processor overheating at start-up and requests that I press F1 to continue into boot. The PSU concern was dismissed too: he has an 850W power supply in a rig that was estimated to draw only some 720 watts, and that's overcompensating to be safe.

    He opened up his case to make sure nothing was seated wrong or in the way. All was fine, but he did notice a sticker on his video card, with a giant barcode and some numbers. Now, I'm used to seeing these stickers - they're the warranty stickers, right, and removal voids the warranty? We find it odd because this sticker is slapped right over the circuits of the video card, not on a block or anything. Is this normal? Should he remove it?

    Right now, I am concerned with the memory. Could that be at fault? Here are his specs:

    - Windows 7 Home Premium, 64-bit
    - Intel i7 950
    - EVGA GeForce 570 GTX
    - 4 GB DDR3 PC10666 dual-channel Corsair RAM
    - Corsair 850W PSU
    - Gigabyte GA-X58A-UD3R
    - Western Digital 1 TB WD1001FALS


  • domain/IN: has no NS records

    - by thejartender
    I have set up a home web server using Ubuntu 12.10, and I can safely say that it works with regards to router forwarding and ports being reachable. I know this because I switched my hosting provider's VPS SOA record to use my ISP IP with an 'A' value and had my website running from home. This verified that my server was configured correctly, so I started what I believe to be the final step in making my old desktop into a full DNS server. I found a tutorial that got me started.

    My LAN consists of the following:

    - My router, with a gateway of 10.0.0.zzz
    - My server, with an IP of 10.0.0.xxx
    - A laptop, with an IP of 10.0.0.yyy

    Step 1: I installed bind via sudo apt-get install bind9

    Step 2: I configured /etc/bind/named.conf.local with:

        zone "sognwebdesign.no" {
            type master;
            file "/etc/bind/zones/sognwebdesign.no.db";
        };

        zone "0.0.10.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/rev.0.0.10.in-addr.arpa";
        };

    Step 3: Updated /etc/bind/named.conf.options with two ISP DNS addresses

    Step 4: Updated /etc/resolv.conf with:

        nameserver 10.0.0.xxx
        search lan
        search sognwebdesign.no

    Step 5: Created an /etc/bind/zones directory

    Step 6: Created /etc/bind/zones/sognwebdesign.no.db with:

        $TTL 3D
        @       IN      SOA     ns.sognwebdesign.no. admin.sognwebdesign.no. (
                                2007062001
                                28800
                                3600
                                604800
                                38400 );
        sognwebdesign.no.       IN      NS      ns1.sognwebdesign.no.
        sognwebdesign.no.       IN      NS      ns2.sognwebdesign.no.
        sognwebdesign.no.       IN      NS      ns3.sognwebdesign.no.
        NS1     IN      A       10.0.0.1
        NS2     IN      A       10.0.0.2
        NS3     IN      A       10.0.0.3
        www     IN      A       10.0.0.4
        yuccalaptop     IN      A       10.0.0.19
        gw      IN      A       10.0.0.138
                TXT     "Network Gateway"

    Step 7: Created /etc/bind/zones/rev.0.0.10.in-addr.arpa with:

        $TTL 3D
        @       IN      SOA     ns.sognwebdesign.no. admin.sognwebdesign.no. (
                                2007062001
                                28800
                                604800
                                604800
                                86400 );
        zzz     IN      PTR     gw.sognwebdesign.no.
        1       IN      PTR     ns1.sognwebdesign.no.
        2       IN      PTR     ns2.sognwebdesign.no.
        3       IN      PTR     ns3.sognwebdesign.no.
        yyy     IN      PTR     yuccalaptop.sognwebdesign.no.

    I then restart bind, and dig -x sognwebdesign.no works. Lastly I run named-checkzone on each of my zone files, but my reverse zone check fails with:

        sognwedesign.no/IN: has no NS records

    Can anyone explain what I am doing wrong here, or assist me in getting this configured correctly?
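    For what it's worth, the named-checkzone error reads literally: the reverse zone above declares an SOA and PTR records but no name servers, so the zone is rejected until NS records are added. A minimal sketch of the missing lines, reusing the names from the forward zone:

        @       IN      NS      ns1.sognwebdesign.no.
        @       IN      NS      ns2.sognwebdesign.no.
        @       IN      NS      ns3.sognwebdesign.no.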


  • Need help troubleshooting highly variable ping times

    - by Elliot.Bradshaw
    I'm at work using Citrix (think Remote Desktop) to connect to client sites. With my job I have to write a fair bit of code while I'm connected remotely via Citrix, so the latency of my internet connection is important. If I'm getting ping times above 250ms, it becomes almost impossible to scroll, click or type with accuracy.

    Recently my Comcast business internet has been exhibiting highly variable ping times. If I ping google.com, I'll get pings that range from 9ms all the way up to 1300ms. The problem seems to be at its worst between 1PM and 4:30PM. Outside of those hours the variance in pings settles down, mostly between 9ms and 50ms. The signal-to-noise ratio and upstream power are both fine on my modem - the values are here: http://pastebin.com/D4hWGPXf

    I ran a trace route from my computer to google.com (the results of which are here: http://pastebin.com/GcdjYvMh) and did another test ping to the IP of the first hop outside of our local network (73.98.44.1) - the variance in ping times existed in exactly the same manner as if I were pinging Google. Connecting directly to the cable modem by CAT5 makes no difference. Here is a screenshot demonstrating the variance of the ping times: http://postimage.org/image/haocdeauv/full/ - as you can see, it can get pretty bad.

    Three Comcast techs have been out (two of them were here when the problem wasn't happening) and they, as well as the regional tier 2 Comcast support, were unable to diagnose the problem. I now have a ticket open with tier 3 support, but have yet to hear back from them. Does anyone know what could cause these sorts of problems, or have any idea from the traceroute above where it could be originating? The regional tier 2 guy tried to tell me that what I'm seeing is normal - are highly variable ping times like that ever acceptable? Anything I should ask Comcast to do or look at to get this problem fixed? Any tips/advice much appreciated!

    Edit: This is Comcast cable internet at a small start-up; we've ruled out congestion in our private LAN as a cause (i.e., no one's watching YouTube when the pings become variable).

    Update: Tier 3 Comcast support advised swapping out the modem. A tech came here today and did that; the same problem persists.
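    One tool suggestion beyond a plain traceroute (mine, not from the post): mtr combines traceroute with repeated pings, so per-hop loss and jitter show up directly. Run it during the bad window, and the first hop where the spread explodes is the one to show Comcast:

        # 100 probes in report mode; best/average/worst per hop
        mtr --report --report-wide -c 100 google.com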


  • Wifi network stopped being visible (and usable) (Linksys wag320n)

    - by s427
    Basically, my wifi network simply stopped working for no apparent reason. It doesn't appear in the list of available networks anymore. I can see all my neighbors' networks, but not mine; it's as if it doesn't exist anymore. The internet connection (non-wifi), which goes through the same modem/router, is fine though. I already had a similar problem about one year ago (see here: Wifi network SSID not visible), just after buying this very modem. I finally got it to work after performing two factory resets and getting rid of the Cisco "Magic" software, but this time nothing works.

    I use a Linksys router-modem (WAG320N) which is directly connected (via network cable) to my desktop computer (Windows 7). I have (mainly) two devices that use the wifi network: my phone (Samsung Galaxy Nexus) and an Asus tablet (TF201, aka Transformer Prime). I also resurrected an old laptop (Dell, running Windows XP) to test, and it doesn't see anything either (apart from the 20 other wifi networks, of course ^^). The wifi network was working just fine and had been for about a year. I haven't touched the modem settings, so I have no idea what's causing the problem.

    I tried:

    - making my phone "forget" my network, hoping it would see it again after that: no luck.
    - re-entering the network information (SSID/password) manually on my phone: still no luck (it says it's not in range).
    - exporting the modem configuration, resetting the modem (factory reset, via the modem admin), restarting it, importing the configuration: nope.
    - factory reset, turning it off for 15 minutes, restarting, re-doing the factory reset, and entering the configuration manually: still nothing.

    Has anybody experienced something similar before? Have you any suggestion to fix this? Thanks in advance.

    PS: to clear things up, here are the settings of my modem regarding wifi:

        Basic wireless settings:
        Configuration: manual
        Radio Band: 2.4GHz
        Wireless Network Mode: B/G/N-Mixed
        SSID: s427
        Channel Bandwidth: Wide - 40 MHz Channel
        Wide Channel: 9 - 2.452GHz
        Standard Channel: 11 - 2.462GHz
        SSID Broadcast: Enable

        Advanced Wireless Settings:
        AP Isolation: Disable
        Authentication Type: Auto
        Basic Rate: Default
        Transmission Rate: Auto
        N Transmission Rate: Auto
        CTS Protection Mode: Disable
        Beacon Interval: 100
        DTIM Interval: 1
        Fragmentation Threshold: 2346
        RTS Threshold: 2346


  • Configuring Novell iPrint client on Ubuntu 13.10

    - by Mahdi Sadeghi
    Recently I have struggled a lot to make the Novell iPrint client work on my laptop. I need it to use the Follow Me printers at our university (you can collect your print job from any printer). Using a tutorial from Novell, I tried to convert the rpm package and install it on Ubuntu 13.04 & 13.10. The post-install script of the generated deb package had a typo, which I saw in the post-install messages and fixed. Now I have the client running. To see the client UI I installed the Cinnamon desktop (because Unity does not have a system tray, and the old whitelisting workarounds didn't work for the Novell client). I have the iPrint plugin installed in Firefox as well (I copied the shared object files to the plugin directories).

    I try installing printers from the provided ipp URL (which lists the available printers on the server) with no success; after clicking the printer name I get various errors. Formerly, Firefox used to ask for my network username/password when installing the SSL printer, but now it returns:

        iPrint Printer - The printer is currently not available.

    However, I can install the non-SSL version, but the printer location is either empty or points to file:///dev/null. Even if I change it to the exact address which I see on working machines, it still prints nothing. I have tried the Novell command-line tool, iprntcmd, to print. It is installed at /opt/novell/iprint/bin/:

        msadeghi@werkstatt:/opt/novell/iprint/bin$ ./iprntcmd --addprinter ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me\ -\ IPP
        iprntcmd v05.04.00
        Adding printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP.
        Added printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP successfully.

    It adds the printer with an empty location, and again no print. What I found interesting is the log file at ~/.iprint/errors.txt, with strange errors which I hope somebody here can understand. When I try to install the SSL printer I receive these logs (note that the HP entries refer to my local printer and have nothing to do with iPrint):

        Thu Oct 31 11:02:03 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
        Thu Oct 31 11:02:03 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
        Thu Oct 31 11:02:05 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
        Thu Oct 31 11:02:05 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
        Thu Oct 31 11:02:06 2013  Trace Info: mydoreq.c, line 676  Group Info: CLIB  Error Code: 0 (0x0)  User ID: 1000
          Error Msg: Success
          Debug Msg: MyCupsDoFileRequest - httpReconnect failed (0)
        Thu Oct 31 11:02:06 2013  Trace Info: mydoreq.c, line 1293  Group Info: CUPS-IPP  Error Code: 1282 (0x502)  User ID: 1000
          Error Msg: iPrint Printer - The printer is currently not available.
          Debug Msg: MyCupsDoFileRequest - IPP SERVICE UNAVAILABLE
        Thu Oct 31 11:02:06 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
        Thu Oct 31 11:02:06 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
        Thu Oct 31 11:02:08 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
        Thu Oct 31 11:02:08 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

    I should say that my friend can print using the same instructions on CrunchBang easily, and another guy got it working on 12.04 LTS, though with more struggling. It worked for me on Linux Mint Maya with my old laptop as well. Is there anybody out there who can help me solve these problems? I am really disappointed with Novell and our university support.

    PS. I had the same problem with 13.04. Whether I am inside the network or connect via VPN, I have the same issues.

