Search Results

Search found 109118 results on 4365 pages for 'one red network'.

Page 72/4365 | < Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >

  • Android Nexus One - Can I save energy with a color scheme?

    - by Max Gontar
    Hi! I'm wondering which color scheme is more energy-saving for an AMOLED display. I've already decided to manage the color scheme according to ambient light, thanks to this post: Somewhat-proof, the link posted by nickf: Ironic Sans: Ow My Eyes. If you read that in a well lit room, the black-on-white will be the most pleasant to read. If you read it in a dark room, the white-on-black will be nicer. But if I want to save battery power, should I use dark content on a light background or vice versa? Is it possible to save power this way at all (they say it's not)? Thanks!

    Read the article

  • Spring-Hibernate: How to submit a form when the object has one-to-many relations?

    - by Czar
    Hi, I have a form that changes the properties of my object CUSTOMER. Each customer has related ORDERS. The ORDERS table has a column customer_id which is used for the mapping. All works so far; I can read customers without any problem. But when I change, e.g., the name of the CUSTOMER in the form (which does NOT show the orders), the name is updated after saving, but all relations in the ORDERS table are set to NULL (the customer_id for the items is set to NULL). How can I keep the relationship intact? Thanks

    Read the article

  • PowerShell One Liner: Duplicating a folder structure in a Sharepoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day if it was possible in Sharepoint to create a set of top level folders in one document library based on the set of folders in another library. One document library has a set of top level folders that is basically a client list, and we needed to create the same top level folders in another library. I knew that it was possible to open a Sharepoint document library in explorer using a UNC style path, and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while explorer would let us copy the folders, it would also take all of the folder contents, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go, and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :)

        dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"}

    I use "dir" to get a listing from the source folder, pipe it through "where" to get only the objects that are folders, and then do a foreach (using the % alias) and call "mkdir".

    Read the article

  • Merge two different API calls into One

    - by dhilipsiva
    I have two different apps in my Django project. One is "comment" and the other is "files". A comment might have some files attached to it. The current way of creating a comment with attachments is by making two API calls. The first one creates the actual comment and replies with the comment ID, which serves as the foreign key for the Files. Then for each file, a new request is made with the comment ID. Please note that files is a generic app that can be used with other apps too. What is the cleanest way of merging this into one API call? I want to have this as a single API call because I am in a situation where I need to send the user an email with all the files as attachments when a comment is made. I know queueing is the ideal way to do it, but I don't have the liberty to add queueing to our stack right now. So this was the only way I could think of.

    Read the article

  • Dual monitors with one above the other?

    - by Felix
    I'm using Gnome 3 and proprietary Nvidia drivers. I have tried to set in nvidia-settings my external monitor to be "above" my main one (it's a laptop). However, when I try to drag a window up from the main display to the external one, it gets stuck and can't move past a certain point. Trying to maximize it changes its decoration so it looks maximized (i.e. no borders, etc), but its size or position doesn't change. Now, if I set my external monitor to be "to the left" of the main one, it works, which is why I suspect this is a Gnome issue, not an Nvidia one. Anyone know how to fix this? Update: some version numbers: Gnome 3.2.2.1, Nvidia 280.13. Update 2: I can see that Gnome 3.4 is out, and among the release notes is better external monitor support. However, they only mention a small fix that is unrelated to my problem. Can anyone with Gnome 3.4 and access to an external monitor please test this out and tell me if it works? I don't want to go through the hassle of upgrading my Ubuntu installation unless I know for certain it's going to fix the problem.

    Read the article

  • Complex shading using one single (small) texture

    - by teodron
    Recently I stumbled upon a demo reel in UDK showing how one can attain beautiful results using just one (rather tiny) texture sent through the shader pipeline. The famous link is this one. Basically, the author states that they've used just one texture and gives a snapshot of the technique here. I see that every RGBA channel contains different grayscale information, and that info could be used inside a shader to obtain a colour-blended output. The problem is that the reel displays a fairly complex scene, and to top that, the author even makes use of a normal map. How did they manage to fit a normal map into an already cluttered texture? It makes sense to store a normal map in only two channels, say RG, since the unit-length Z component can be reconstructed in the shader as z = sqrt(1 - x² - y²), but what about the rest of the information? Since it was proven to be possible, could someone please explain how it was done (the big picture, not the dirty details!)? Here's the texture being used. Click to see in full size.
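
    A minimal sketch of the channel-packing idea (CPU-side C#, purely illustrative; the channel assignments are my assumption, not necessarily the author's): two channels store the normal's X/Y, Z is rebuilt from unit length, and the remaining channels stay free for grayscale masks.

        using System;

        // Decode a tangent-space normal stored in just two texture channels
        // (r and g hold values in [0,1]); Z is recoverable because the
        // normal has unit length.
        static float[] DecodeNormal(float r, float g)
        {
            float nx = r * 2f - 1f;
            float ny = g * 2f - 1f;
            float nz = (float)Math.Sqrt(Math.Max(0f, 1f - nx * nx - ny * ny));
            return new[] { nx, ny, nz };
        }

    The channels not used by the normal can then hold grayscale masks (ambient occlusion, blend weights between tint colours, and so on) that the shader combines with per-material constants.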

    Read the article

  • Red Box is not working

    - by palani
    Hi, I installed the RedBox plugin using the following command: script/plugin install svn://rubyforge.org/var/svn/ambroseplugins/redbox. It installed successfully. I then ran rake update_scripts from /myapp/vendor/plugins/redbox, and it showed me the following output:

        (in /myapp/vendor/plugins/redbox)
        rake aborted!
        private method `copy' called for File:Class
        /home/myapp/vendor/plugins/redbox/Rakefile:28
        (See full trace by running task with --trace)

    I don't know how to solve this. Then I understood that rake update_scripts only copies the JS and CSS files, so I manually copied redbox.js and redbox.css into their respective places under the /public folder. I included the following in my application.html.erb:

        <%= stylesheet_link_tag 'redbox' %>
        <%= javascript_include_tag :defaults %>
        <%= javascript_include_tag 'redbox' %>

    They are included in the page successfully. The following is my view code:

        <%= link_to_remote_redbox('Red_box', :url => {:action => 'log'}, :method => 'get') %>

    The popup box doesn't appear, and I have no clue what the exact error is. Is there a jQuery clash? Please help me.

    Read the article

  • How to make the red div not alert 'ss' when the black div is dragged onto it

    - by zjm1126
    I'm using jQuery and jQuery UI. This is my code:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
        <meta name="viewport" content="width=device-width, user-scalable=no">
        </head>
        <body>
        <style type="text/css" media="screen">
        </style>
        <div id=a style="width:300px;height:300px;background:blue;position:absolute;"></div>
        <div id=b style="width:100px;height:100px;background:red;position:absolute;"></div>
        <div id=c style="width:50px;height:50px;background:black;clear:both"></div>
        <script src="jquery-1.4.2.js" type="text/javascript"></script>
        <script src="jquery-ui-1.8rc3.custom.min.js" type="text/javascript"></script>
        <script type="text/javascript" charset="utf-8">
        $("#c").draggable({});
        $("#b").droppable('disable'); // this is not useful
        $("#a").droppable({
            drop: function(event, ui) {
                alert('ss');
            }
        });
        </script>
        </body>
        </html>

    Read the article

  • How to use AD/GPO/Print Services to "push out" a new printer driver to replace a broken one? How did my server get a broken driver?

    - by Zac B
    Context: We have an AD/GPO-managed corporate network with a little over a hundred PCs running Windows 7 x64, and a few managed printers. Our Server 2008 R2 primary domain controller is configured as a print server for them all.

    Problem: After a recent Windows update and restart (no printer driver updates were included) on the DC, a particular shared printer (Lexmark T650) has begun exhibiting some strange behavior. First, it prints a preceding and following blank page for almost every document, on jobs submitted by about half of the client machines (no separator page is configured on the server or any of the clients I've seen). Second, whenever someone tries to access "Printing Preferences" on any client, they receive an error message (this happens everywhere, 100% of the time, and didn't happen before the update on the DC). Once they click "OK", the prefs screen appears (with no separator page selected) and everything seems fine. I'm not even sure if these two issues are related, but everyone seems affected by one or both of them.

    What I've Tried: I've been hesitant to un-deploy the problem printer, or remove it via GPO, as it's pretty heavily used. I've tried updating (via MS Update and our internal WSUS server) the client machines and the DC. No printer driver updates have appeared, and no number of updates or restarts on the server or the clients seems to have achieved anything, other than my boss getting grumpy that I'm bouncing the domain controller so often. I've tried deleting the drivers on the server and re-installing them from the original source that has worked for the past year... no change. I've tried selecting "New Driver" for one of the shared printers on a client machine, running as domain admin, and pushed the latest driver found by MS Update back up to the DC. This changed the version number of the driver recorded in the print server manager, but caused no change on the client I pushed from, or on any other; the error still appears.

    Question: Why the heck is this happening? Obviously I got a bad driver from somewhere, but how do I get rid of it? I don't know of any "roll back driver" functionality for centrally managed print drivers like Windows offers for other devices. How would I a) get this issue resolved on a client, and b) push the fix to the other members of the domain?
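
    No definitive answer here, but one manual route for replacing a suspect driver on the server (an assumption that it fits this setup; pause or briefly undeploy the queue first) is the print server's Drivers tab, where a driver can be removed together with its package and re-added from a known-good source:

        rundll32 printui.dll,PrintUIEntry /s /t2

    Clients that received the driver via point-and-print should pick up the replacement the next time they connect to the queue, since they compare driver versions against the server's.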

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed major new features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind it flawed: why should I care how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And if I need the file system, why can't I simply get a virtual filesystem?

    To illustrate my point, let's use a real example: a product website with a customer system/database and next to it a support site with an accompanying database. Both are written in .NET using ASP.NET, and each uses a SQL Server database. The product website offers files for customers to download; very simple. You have a couple of options to host these websites:
    - Buy a server, place it in a rack at an ISP and run the sites on that server
    - Use 'shared hosting' with an ISP, which means your sites' appdomains run on the same machine as others', the files are stored there as well, and the databases are hosted on the same server as the other shared databases
    - Hire a VM, install your OS of choice at an ISP, and host the sites on that VM; basically the same as the first option, except you don't have a physical server
    - At some cloud vendor, either host the sites 'shared' or in a VM; see above

    With all of those options, scalability is a problem, even the cloud-based ones, though not for the same reasons:
    - The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead
    - Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions
    - The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs; however, that too introduces the same overhead problems as with the physical servers: suddenly more than one instance runs your sites

    If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there. When you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial, but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment causes a problem: all the in-memory state they use, all the multi-page transitions they use while keeping state across the transition, none of that works the way it did on a single machine. State is something of the past; you have to store every byte of state in a DB, a viewstate or a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in memory.

    Our example uses a bunch of files in a file system. Using multiple VMs requires that these files move to a cloud storage system which is mounted in each VM, so we don't have to store the files on each VM. This might require different file paths, but this change should be minor. What's perhaps less minor is the maintenance procedure on the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different ways / tools. All in all this makes moving an existing website, which was written for an environment that's based around a VM (namely .NET with its CLR), overly cumbersome and problematic: it forces you to refactor your website system to be able to run 'in the cloud', which is caused by the limited way e.g. Windows Azure offers its cloud services: in blocks of VMs.

    Offer a scalable, flexible VM which extends with my needs

    Instead, cloud vendors should simply offer me one VM. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned) I don't care about; that's a problem for the cloud vendor to solve. If I need more resources, e.g. more traffic to my server, way more visitors per day, the VM stretches, as if I had bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all. I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM and I'm done.

    "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change to what they offer to make it look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble whether they will need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about: is that a low-level VM which runs a guest OS, or is that VM a different kind of VM?

    The flexible VM: .NET's CLR?

    My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system, but this too is accessed through .NET. In short: all the websites see is what .NET allows them to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically is the concern of .NET, not mine. This begs the question: why doesn't Windows Azure offer virtual appdomains? Or better: .NET environments which look like one machine but could physically be multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or your own server to the cloud to get proper scaling: the .NET VM simply scales with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain simply increases, but isn't fragmented: all resources are available to the application. This means the problem of how to scale is back where it should be: with the cloud vendor.

    "Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that. In other words: we can host the databases in an environment which offers itself as a single resource, is accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB.

    But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed, no refactoring of existing code. Upload it, run it.

    Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all, it simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of me needing to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run; almost all websites created by developers can run on just a few VMs at most. Yet the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is as easy as clicking a mouse button. But that first step is the problem here, and as it's right at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve it and leave it to the developers. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • REPLACE Multiple Spaces with One

    Replacing multiple spaces with a single space is an old problem that people use loops, functions, and/or Tally tables for. Here's a set based method from MVP Jeff Moden.

    Read the article

  • C#.NET framework 3.5 SP1: satellite assemblies and FullTrust issues when the exe is on the network.

    - by leo
    Hi, I'm executing my .NET app from a network share. Since framework 3.5 SP1, and as explained here: http://blogs.msdn.com/shawnfa/archive/2008/05/12/fulltrust-on-the-localintranet.aspx, the main exe and all the DLLs located in the same folder (but not subfolders) are granted with FullTrust security policy. My problem is that I have subfolders for satellite assemblies with localized strings. Namely, I have:
    1) FOLDER\APP.EXE
    2) FOLDER\A whole bunch of DLLs
    3) FOLDER\LANGUAGE1\Satellite assemblies
    4) FOLDER\LANGUAGE2\Satellite assemblies
    1 and 2 are automatically granted with FullTrust. 3 and 4 are not and my application is really slow because of that. Is there a way to grant 3 & 4 FullTrust security policy at runtime, since the application running has FullTrust? If not, is there a clean way to have satellite assemblies merged into only one DLL? Thanks a lot.
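
    Not a runtime fix, but one era-appropriate workaround (an assumption that CAS policy is the culprit, as the linked article suggests) is a one-time machine-policy entry on each client granting the whole share FullTrust, subfolders included, via caspol. The group number and share path below are placeholders; run caspol -m -lg first to see your machine's actual policy tree:

        caspol -m -ag 1.2 -url "file://\\server\FOLDER\*" FullTrust -n "AppShare"

    Merging the satellite assemblies away is less practical, since the runtime locates them by the per-culture subfolder convention.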

    Read the article

  • Load balancer - how to write one for a custom application?

    - by Poni
    Hi! I've written a simple server application which will run distributed on several machines. My question is: how does a network load balancer work, in general? I've heard of round-robin and other algorithms, but what I haven't found an answer to is how the process really goes, in socket terms. Does the client connect to one of the load balancer machines, ask for a "free-to-connect-to" server, and simply connect to it? That's the simplest way I can think of. Or does it use the load balancer as a proxy (which implies that all the LBs must always stay connected to the application servers, and data is transferred through them)? It's more of a general question. How would you do this? Thank you all!
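
    To make the first variant concrete, here is a minimal hedged sketch in C# (addresses are placeholders): the balancer does nothing but hand each incoming client the next server in round-robin order; the client then reconnects directly to that server, so no payload traffic flows through the balancer.

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class RoundRobinBalancer
        {
            static readonly string[] Servers = { "10.0.0.11:9000", "10.0.0.12:9000" };
            static int next;

            static void Main()
            {
                var listener = new TcpListener(IPAddress.Any, 8000);
                listener.Start();
                while (true)
                {
                    using (TcpClient client = listener.AcceptTcpClient())
                    {
                        // Hand out backends in turn; the client reconnects directly.
                        string backend = Servers[next++ % Servers.Length];
                        byte[] reply = Encoding.ASCII.GetBytes(backend);
                        client.GetStream().Write(reply, 0, reply.Length);
                    }
                }
            }
        }

    The proxy variant instead keeps each connection open and relays bytes both ways, which hides the servers from clients at the cost of all traffic passing through the balancer.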

    Read the article

  • How to convert the output of an artificial neural network into probabilities?

    - by Mathieu Pagé
    I read about neural networks a little while ago and I understand how an ANN (especially a multilayer perceptron that learns via backpropagation) can learn to classify an event as true or false. I think there are two ways: 1) You get one output neuron. If its value is > 0.5 the event is likely true; if its value is <= 0.5 the event is likely false. 2) You get two output neurons. If the value of the first is greater than the value of the second, the event is likely true, and vice versa. In these cases, the ANN tells you if an event is likely true or likely false. It does not tell how likely it is. Is there a way to convert this value to some odds, or to directly get odds out of the ANN? I'd like to get an output like "The event has an 84% probability of being true."
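
    One standard approach (not tied to any particular library, and assuming the network was trained with a matching loss): treat the raw outputs as scores and normalize them with the softmax function, which makes them positive and sum to 1. With a single sigmoid output neuron trained on 0/1 targets with cross-entropy loss, the output value itself can already be read as a probability. A hedged C# sketch of softmax:

        using System;
        using System.Linq;

        static double[] Softmax(double[] outputs)
        {
            double max = outputs.Max();                 // subtract max for numeric stability
            double[] exps = outputs.Select(o => Math.Exp(o - max)).ToArray();
            double sum = exps.Sum();
            return exps.Select(e => e / sum).ToArray(); // positive, sums to 1
        }

    For two output neurons this yields exactly the kind of statement asked for: raw outputs of [1.2, -0.4] give roughly [0.83, 0.17], i.e. "about 83% probability of being true". Note the values are only as well calibrated as the training procedure makes them.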

    Read the article

  • How to get files from one PC to another using C#.NET?

    - by shruti
    I'm using C#.NET. I want to get the files that are on the server PC onto my PC; both PCs are connected through the network. I have given the IP address of that PC in the path, but it's not copying the files to my folder. I'm using the following code, but it's not working; kindly help me out: File.Copy(Path.GetFileName(sourceFile), Path.GetDirectoryName(targetpath)); In sourceFile I have given the IP address plus the folder path of the server PC, and in targetpath I have given the path of the folder on my PC to which I want to copy the files.
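
    A hedged guess at the fix (the paths below are made up): Path.GetFileName strips the directory from the source, so the copy looks in the current directory instead of the share, and the destination must be a full file path, not a directory. Something like this is the usual shape:

        using System.IO;

        string sourceFile = @"\\192.168.1.10\SharedDocs\report.xls"; // UNC path on the server
        string targetDir  = @"C:\LocalCopies";

        string targetFile = Path.Combine(targetDir, Path.GetFileName(sourceFile));
        File.Copy(sourceFile, targetFile, true); // true = overwrite if present

    The account running the program also needs read access to the share, or the copy fails regardless of the paths.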

    Read the article

  • Good way to send a large file over a network in C#?

    - by BFreeman
    I am trying to build an application that can request files from a service running on another machine on the network. These files can be fairly large (500 MB+ at times). I was looking into sending them via TCP, but I'm worried that it may require the entire file to be stored in memory. There will probably only be one client. Copying to a shared directory isn't acceptable either. The only communication required is for the client to say "gimme xyz" and the server to send it (and whatever it takes to ensure this happens correctly). Any suggestions?
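
    The file doesn't have to be in memory at once: TCP is a stream, so it can be read and written in small chunks. A minimal hedged sketch of the sending side (host, port and path are placeholders):

        using System.IO;
        using System.Net.Sockets;

        static void SendFile(string host, int port, string path)
        {
            using (var client = new TcpClient(host, port))
            using (NetworkStream net = client.GetStream())
            using (FileStream file = File.OpenRead(path))
            {
                var buffer = new byte[64 * 1024]; // only 64 KB in memory at a time
                int read;
                while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                    net.Write(buffer, 0, read);
            }
        }

    Sending the file length (and perhaps a checksum) before the bytes lets the receiver know when the transfer is complete and whether it arrived intact.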

    Read the article

  • Best neural network for certain type of pattern analysis?

    - by fred basset
    Hi All, I'm working on a system that will send telemetry data on machine operation back to a central server for analysis. One of the machine parameters we're measuring is motor current drawn vs. time. After an operation is finished, we plan to send back an array of currents vs. time to the server. A successful operation has a pattern like a trapezoid; problematic operations have a completely different pattern, more like a large spike in values. Can anyone recommend a type of neural network that would be good at classifying these 1D vectors of current values into a pass/fail type output? Thanks, Fred
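
    For a pass/fail decision on shapes this distinct, a plain multilayer perceptron (one input per sample point, one pass/fail output) is usually sufficient; the main practical step is giving every trace the same length and scale first. A hedged C# sketch of that preprocessing (target length and scaling are assumptions):

        using System.Linq;

        static double[] Preprocess(double[] currents, int targetLen)
        {
            double max = currents.Max();
            var result = new double[targetLen];
            for (int i = 0; i < targetLen; i++)
            {
                // nearest-neighbour resample onto a fixed-length grid
                int src = (int)((long)i * currents.Length / targetLen);
                result[i] = max > 0 ? currents[src] / max : 0.0; // scale into [0,1]
            }
            return result;
        }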

    Read the article

  • Many network adapters on a machine; need to find the one used for traffic in Windows (from .NET)

    - by viko
    My application uses a web service. I want to know which workstation each request came from, so I send the MAC address as a parameter of all methods. But when I started testing the application for real, I found workstations which have many network adapters: Ethernet, wireless, Bluetooth. I get the MAC address using this code:

        var networkAdapters = NetworkInterface.GetAllNetworkInterfaces();
        if (networkAdapters == null || networkAdapters.Length == 0)
            return string.Empty;
        string address = string.Empty;
        foreach (var adapter in networkAdapters)
        {
            var a = adapter.GetPhysicalAddress();
            if (a != null && a.ToString() != string.Empty)
            {
                address = a.ToString();
                break;
            }
        }
        return address;

    Sometimes the web service receives different MAC addresses from the same workstation, but I want to always get only one MAC address. Please help me.
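
    A hedged refinement of the loop above: instead of taking the first adapter with any physical address, prefer one that is up, isn't a loopback, and has a default gateway, which is normally the adapter actually carrying traffic.

        using System.Linq;
        using System.Net.NetworkInformation;

        static string GetActiveMacAddress()
        {
            var active = NetworkInterface.GetAllNetworkInterfaces()
                .Where(n => n.OperationalStatus == OperationalStatus.Up
                         && n.NetworkInterfaceType != NetworkInterfaceType.Loopback)
                .FirstOrDefault(n => n.GetIPProperties().GatewayAddresses.Count > 0);

            return active == null ? string.Empty : active.GetPhysicalAddress().ToString();
        }

    Note that a machine can legitimately have several active adapters, in which case identifying workstations by MAC address stays ambiguous; a configured station ID may be more reliable than any hardware address.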

    Read the article

  • Nexus One Guys…Android 2.3 update coming your way

    - by Boonei
    Good news! If you are a Nexus One customer, Google said in a tweet: "The Gingerbread OTA for Nexus One will happen in the coming weeks. Just hang tight!" Non-Nexus owners have to wait much longer; there's no word yet on when their phone maker and operator will roll out the same update.

    Read the article

  • Networking is disabled after installing Maverick

    - by Zifre
    I recently installed Ubuntu 10.10 (Maverick Meerkat). Everything was working fine. Then I just started up the computer again, and the networking doesn't work. The network manager applet says "Networking disabled", and the button is disabled, so I can't enable it. This question seems to be basically the same issue I have: managed was set to false there, but changing it to true does not fix the problem. Is there any other way to fix this problem?

    Read the article

  • Deleting an old Tomcat version and setting up a new one

    - by Diego
    I had Apache Tomcat installed via apt-get, but I decided to get a newer one, so I performed apt-get remove tomcat7 and apt-get purge tomcat7. I then installed a newer one, namely the Tomcat server bundled with the NetBeans installer. However, I'm still seeing the old default page from the former Tomcat install: "It works! If you're seeing this page via a web browser, it means you've setup Tomcat successfully. Congratulations! This is the default Tomcat home page. It can be found on the local filesystem at: /var/lib/tomcat7/webapps/ROOT/index.html". I already set a different port in the server.xml file, and whenever I go to that site after executing the startup.sh file with sudo permissions, I don't get any page, as if the (new) server isn't running. How can I still be getting the page from the old Tomcat install? When I execute startup.sh, the log says everything is set up OK, so why isn't it working?

    Read the article

  • Can I share a TV card?

    - by Boris
    My environment:
    - 2 PCs, a desktop and a laptop, both on Oneiric
    - they are connected together by an ethernet wire
    - nfs-common is installed and configured: the desktop is the server
    - a TV tuner card is installed in the desktop; I can watch TV with the software Me-TV
    It works fine: TV on the desktop, and my network too (I share folders thanks to NFS). But I would like more: how can I share my TV tuner card from the desktop and be able to watch TV on the laptop too? If possible I would like a solution that allows me to keep using the software Me-TV on both PCs. I bet that there is a solution to create a fake TV card on the 2nd PC with xNBD. I'm trying xnbd-server --target /dev/dvb/adapter0/demux0 but I can't make it work. Trying to understand some examples of xNBD command lines, it seems to be meant only for sharing disk devices. If someone has ever used xNBD, he's welcome.

    Read the article

  • How to Share Files Online with Ubuntu One

    - by Chris Hoffman
    Ubuntu One, Ubuntu's built-in cloud file storage service, allows you to make files publicly available online or share them privately with others. You can share files over the Internet right from Ubuntu's file browser. Ubuntu One has two file-sharing methods: Publish, which makes a file publicly available on the web to anyone who knows its address, and Share, which shares a folder with other Ubuntu One users.

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >