Search Results

Search found 5366 results on 215 pages for 'fully qualified naming'.

Page 105/215 | < Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >

  • ASP.NET Web Application: use 1 or multiple virtual directories

    - by tster
    I am working on a (largish) internal web application which has multiple modules (security, execution, features, reports, etc.). All the pages in the app share navigation, CSS, JS, controls, etc. I want to make a single "Web Application" project which includes all the pages for the app and then references various projects containing the database and business logic. However, some of the people on the project want separate projects for the pages of each module. To make this clearer, this is the layout I'm advocating:

        /WebInterface*
        /SecurityLib
        /ExecutionLib
        etc.

    And here is what they are advocating:

        /SecurityInterface*
        /SecurityLib
        /ExecutionInterface*
        /ExecutionLib
        etc.

    *project will be published to a virtual directory of IIS

    Basically, what I'm looking for is the advantages of both approaches. Here is what I can think of so far:

    Single virtual directory pros:
    - Modules can share a single MasterPage.
    - Modules can share UserControls (this will be common).
    - Links to other modules are within the same virtual directory, and thus don't need to be fully qualified (see the sketch below).
    - Less chance of having incompatible module versions together.

    Multiple virtual directories pros:
    - A new version of a single module can be published without disrupting other modules.
    - Each module is more compartmentalized, so changes are less likely to break other modules.

    I don't buy those arguments, though. First, using load-balanced servers (which we will have), we should be able to publish new versions of the project with zero downtime, assuming there are no breaking database changes. Second, if something "breaks" another module, then there is either an improper dependency, or the break will show up eventually in the other module when the developers copy over the latest version of the UserControl, MasterPage or DLL. As a point of reference, there are about 10 developers on the project for about 50% of their time. The initial development will be about 9 months.
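
    To make the link-qualification point concrete, here is a minimal sketch with hypothetical page names (not from the original question). With one virtual directory, ASP.NET's app-relative paths work across modules; with separate virtual directories, cross-module links must name the other application:

        <%-- single virtual directory: app-relative link resolves from any module --%>
        <asp:HyperLink runat="server" NavigateUrl="~/Execution/Schedule.aspx" Text="Schedule" />

        <%-- separate virtual directories: the link must be fully qualified --%>
        <asp:HyperLink runat="server" NavigateUrl="http://server/ExecutionInterface/Schedule.aspx" Text="Schedule" />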

    Read the article

  • Transferring Postfix install to new computer

    - by mlissner
    I have Postfix installed on one computer, with DKIM and SPF working properly. What I'd like to do is start using a different computer instead, with a minimal amount of fuss. Mail servers have a way of baffling me, but I know there are things with cryptography going on here that I don't fully understand (and I don't really care to - I figured it out when I set up the last computer about a year ago, and am happy not to delve into it again). Right now, I'm working on the early steps of this process: installing Postfix on the new machine and getting it going. Are there specific steps I could take to move the correct configs, key files and such to the new computer?
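
    As a rough sketch of what typically needs to move, assuming a stock Debian/Ubuntu-style layout (the DKIM path in particular varies with the milter implementation, so treat these paths as assumptions):

        # Postfix configuration: main.cf, master.cf, and any lookup tables
        rsync -a /etc/postfix/ newhost:/etc/postfix/

        # DKIM private keys and signing config (path differs for dkim-milter, opendkim, etc.)
        rsync -a /etc/opendkim/ newhost:/etc/opendkim/

        # plus any TLS certificates/keys referenced from main.cf

    SPF needs no copying at all, since it lives in a DNS TXT record rather than on the box; it only needs editing if the new machine sends from a different IP address.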

    Read the article

  • Why doesn't rsync use delta-transfer for local files?

    - by o_O Tync
    I have a big ISO image which is currently being downloaded by a torrent client with space-reservation turned on: that means the file size is not changing, while some chunks (4 MiB each) are constantly changing as the download proceeds. At 90% downloaded, I do the initial rsync to save time later:

        $ rsync -Ph DVD.iso /some/target/
        sending incremental file list
        DVD.iso
             2.60G 100%   40.23MB/s    0:01:01 (xfer#1, to-check=0/1)
        sent 2.60G bytes  received 73 bytes  34.59M bytes/sec
        total size is 2.60G  speedup is 1.00

    Then, when the file is fully downloaded, I rsync again:

        total size is 2.60G  speedup is 1.00

    Speedup=1 says delta-transfer was not used, although 90% of the file has not changed. Why?!
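
    For context on why this happens: when both the source and the destination are local paths, rsync implies --whole-file on the assumption that a straight copy is cheaper than the delta algorithm's checksumming. Forcing delta-transfer is a matter of negating that default:

        $ rsync -Ph --no-whole-file DVD.iso /some/target/

    Whether this actually wins depends on disk versus CPU speed; when source and target share a spindle, the delta pass reads both files and can come out slower than a plain copy.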

    Read the article

  • Does the new JUnit 4.8 @Category render test suites almost obsolete?

    - by grigory
    Given the question 'How to run all tests belonging to a certain Category?' and its answer, would the following approach be better for test organization?

    - Define a master test suite that contains all tests (e.g. using ClasspathSuite).
    - Design a sufficient set of JUnit categories (sufficient meaning that every desirable collection of tests is identifiable using one or more categories).
    - Define targeted test suites based on the master test suite and the set of categories.

    For example:

    - Identify categories for speed (slow, fast), dependencies (mock, database, integration), function, domain, etc.
    - Demand that each test is properly qualified (tagged) with the relevant set of categories.
    - Create the master test suite using ClasspathSuite (all tests found on the classpath).
    - Create targeted suites by qualifying the master test suite with categories, e.g. a mock test suite, a fast database test suite, a slow integration test suite for domain X, etc. (see the sketch below).

    My question is essentially soliciting the approval rate for this approach vs. the classic test suite approach. One unbeatable benefit is that every new test is immediately contained by the relevant suites with no suite maintenance. One concern is the proper categorization of each test.
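
    For concreteness, a minimal sketch of the moving parts with hypothetical category and test names (plain JUnit 4.8; the master-suite aggregation via ClasspathSuite is elided):

        import org.junit.Test;
        import org.junit.experimental.categories.Categories;
        import org.junit.experimental.categories.Categories.IncludeCategory;
        import org.junit.experimental.categories.Category;
        import org.junit.runner.RunWith;
        import org.junit.runners.Suite.SuiteClasses;

        // categories are plain marker interfaces
        interface FastTests {}
        interface DatabaseTests {}

        // a test tagged with two categories
        class WidgetRepositoryTest {
            @Category({ FastTests.class, DatabaseTests.class })
            @Test
            public void savesWidget() { /* ... */ }
        }

        // a targeted suite: of the listed classes, run only tests tagged FastTests
        @RunWith(Categories.class)
        @IncludeCategory(FastTests.class)
        @SuiteClasses({ WidgetRepositoryTest.class }) // with ClasspathSuite this list would come from the classpath
        class FastTestSuite {}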

    Read the article

  • CNAME to a multi-level Heroku subdomain

    - by user123424234
    I'm trying to create a CNAME that points from my custom domain (s.mydomain.com) to a multi-level subdomain hosted on Heroku (me.myapp.herokuapp.com). I've created the CNAME s.mydomain.com with the value me.myapp.herokuapp.com. When I go to s.mydomain.com, it does not route to me.myapp.herokuapp.com; instead I get:

        method=GET path=/ host=s.mydomain.com dyno=web.1 queue=0 wait=0ms connect=4ms service=18ms status=404

    It's possible I'm not fully understanding how this CNAME should be set up. My desired outcome is for s.mydomain.com to act as if it were at me.myapp.herokuapp.com.
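
    Worth noting: that log line comes from Heroku's router, so the CNAME is resolving; the 404 suggests the router simply doesn't know which app should serve the s.mydomain.com Host header. Assuming a standard Heroku setup (a guess, not something visible in the question), both halves are needed:

        ; DNS side, zone-file notation (trailing dots significant)
        s.mydomain.com.  IN  CNAME  me.myapp.herokuapp.com.

        # Heroku side: register the hostname with the app so the router can match it
        heroku domains:add s.mydomain.com --app myapp

    The me. prefix would then be handled inside the app itself (e.g. by inspecting the Host header), since the router only maps hostnames to apps.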

    Read the article

  • Has anybody been able to install Pantograph for Blender on Windows?

    - by S.gfx
    I am especially interested in whether somebody is actually building/maintaining an installer for Windows, as there are quite a few issues when installing all the dependencies, etc. (for example, there might be someone already building an InstallBuilder installer for it and I am just not aware of it). If not, I'd at least like to hear from someone who got it working and has some key tips to share. I am never able to fully get it all up and running. I'd love to have a not-super-complex way to install each new build of this great vector rendering module for Blender. Edit: Pantograph URL: http://severnclaystudio.wordpress.com/bluebeard/a-users-guide-to-pantograph/

    Read the article

  • Completely formatting a USB flash drive

    - by efcjoe
    I have a Verbatim Store 'n' Go drive which by default comes with software on a partition that password-protects the drive. I want to erase this partition, as the software only works on Windows (I'm having to look at it now through a virtual machine on my Mac). I've tried using KillDisk to totally wipe the whole thing, but it doesn't seem to work, and this password-protecting partition always remains intact. Is there any program which will completely wipe a flash drive, no questions asked? Or is there a way to do it through the Verbatim software? I have the password and everything; I just can't find a way to fully format it. Cheers.
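
    Since there's a Mac at hand, one route that bypasses the Verbatim software entirely is repartitioning the whole device from the command line; the disk2 identifier below is an assumption, so check diskutil list first:

        # identify the flash drive first: erasing the wrong disk is unrecoverable
        diskutil list

        # re-create a single FAT32 partition across the entire device
        diskutil eraseDisk MS-DOS UNTITLED MBR disk2

    If the protected partition survives even this, the lock is likely implemented in the drive's controller firmware rather than in the partition table, and no host-side formatting tool will remove it.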

    Read the article

  • Win XP Pro, IIS 5.1, PCI Compliance

    - by Mudman266
    I have a client that was scanned and determined not to be PCI compliant. I looked, and they had IIS set up to allow a program from the central office to push/pull info from their server. Many of the reasons they failed appeared to have been fixed in service packs (they were on SP2) or security updates. I fully patched the server to (Windows XP Pro) SP3 with all optional updates. I had them scan again, and again they failed, with only one fewer vulnerability, which I corrected manually (the server was showing debugging/error messages). The main issue I'm having is that when I research the CVE code for each finding, the advisories say they are fixed in SP2 and up. I'm wondering if I need to remove IIS and set it up again now that I have patched to SP3. Any ideas?

    Read the article

  • Are these symptoms of a dying video card?

    - by K Cloud
    What I am getting from my PC:

    - Scrambled text in the boot menu. The scrambled text disappears and everything goes back to normal after several restarts, or once the computer has been running for some time.
    - The display goes blank, with Windows reporting that the kernel-mode driver stopped responding.
    - Sometimes Windows just hangs, and I need to restart the PC.

    There used to be some scrambled colors in Windows, so I cleaned my video card and its slot and reseated the card; things got somewhat better. The PC runs normally for hours in safe mode, as the default display driver is used in that case.

    My PC specs:

    - Core 2 Duo processor
    - 2 GB RAM
    - NVIDIA EN210 SILENT GPU
    - Windows 8

    Driver details: NVIDIA drivers v331.65, said to be fully compatible with Windows 8. Currently I am installing Windows 7 and will be testing the older v320 driver, yet I'm really confused whether my GPU is dying or still good to go.

    Read the article

  • SSD runs faster on Windows than on Linux [closed]

    - by wushugene
    Windows 7 seems to install, boot and run much smoother and faster than each of the three Linux distros I have recently tried (Ubuntu 12.04 Unity, Linux Mint 13 MATE, and Fedora 17 on GNOME 3.4). Why am I seeing poor performance on Linux? I have tweaked my Linux installs for the SSD (enabling TRIM, disabling swap, etc.). I'm using an Acer TravelMate with an i5-2410M processor, Intel HD 3000 graphics, 8 GB of RAM, and a 256 GB Samsung 830 SSD. Edit: Boot times are 10-15 seconds slower, there is a noticeable delay from login to a fully loaded desktop, and in general the system does not appear to be as responsive as my old Windows 7 install or the Linux guests I had running on it.
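
    For readers wanting to reproduce the TRIM tweak mentioned above, on distros of that era it was typically done per filesystem via the discard mount option in /etc/fstab (the UUID below is a placeholder):

        # root filesystem on the SSD, with TRIM and reduced metadata writes
        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,noatime,discard  0  1

    A scheduled fstrim run (periodic TRIM) is the common alternative, and is usually cheaper than issuing a discard on every delete.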

    Read the article

  • How to write rules for persistent net names?

    - by ndemou
    I know that a process generates persistent network card names based on rules found in /lib/udev/rules.d/75-persistent-net-generator.rules. I also know how to completely disable this process with a simple

        echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules

    but I've read that I "could also write my own rules file to give the interface a name — the persistent rules generator ignores the interface if a name has already been set" (/etc/udev/rules.d/README confirms that this is possible). Do you have any pointers to documentation about how to write such rules? (I mostly care about Debian/Ubuntu and a bit less about CentOS.)

    As a specific example of why I want to write custom rules: I have two identical servers, each with one onboard LAN and one PCI LAN. In case of hardware failure I want to be able to move the disks from HW#1 to HW#2, and it's important for eth0 to keep pointing to the onboard card and eth1 to the PCI card (no one wants to mess with cabling in the middle of a hardware-failure panic). My current workaround works but is a lot of work[1], so I wonder if writing custom rules would allow me to express something simple like this:

    - cards with MAC A or B should be named eth0
    - cards with MAC C or D should be named eth1
    - follow the default naming scheme for anything else

    [1] Install the OS on HW#1 and keep a copy of /etc/udev/rules.d/70-persistent-net.rules. Move the disks to HW#2 and keep a second copy of the same file. Concatenate the two copies and manually edit the NAME="ethX" parts. Replace /etc/udev/rules.d/70-persistent-net.rules with my version. Finally, disable auto-creation of a new 70-persistent-net.rules using echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules.
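
    The syntax of the generated 70-persistent-net.rules file already expresses exactly this mapping, so a hand-written version covering both servers would look like the following (the MAC addresses are placeholders):

        # /etc/udev/rules.d/70-persistent-net.rules -- hand-maintained, survives disk moves
        # onboard NICs of HW#1 and HW#2 -> eth0
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:aa", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:bb", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
        # PCI NICs of HW#1 and HW#2 -> eth1
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:cc", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:dd", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

    Interfaces whose MACs match none of these lines fall through to the generator's default behavior, which provides the "anything else" clause for free.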

    Read the article

  • Should a MobiTex service with a highly resilient website offer content over WAP?

    - by makerofthings7
    I'm trying to offer services over the MobiTex network (also see the wiki) and want to reduce double work. I'm trying to understand if it is a good idea to WAP-enable my website. Given that WAP usage is increasing (since MMS is a hybrid of SMS + WAP), and the FCC has required every operator in the 700 MHz range to implement it, I'd like to fully understand whether there are benefits to the technology for certain critical applications. For example, if GPRS carries SMS traffic, voice, and data, presumably they are handled by different gateways. If there is another gateway for WAP traffic, I would think that it would act as a backup if the data gateway was overloaded. Are there resiliency benefits to using WAP on a critical website, i.e. for content delivery (push or pull)?

    Read the article

  • Distributed website server redundancy

    - by Keith Lion
    Assume a website's infrastructure is very complicated and fully distributed (probably like most large web companies'). Am I right in thinking that although there are all these extra web servers to handle multiple client requests, there is still a single "machine" through which users must enter? I am guessing this machine will be the one physically associated with the IP address? I ask because I need to know whether, in places where distributed systems exist, there is still a single point of failure, usually the control node or, in this example, the machine connected to the public internet. Surely there cannot be two machines connected to the internet, as they would have to have different IP addresses? This "machine" may not be a server per se; maybe it is a piece of Cisco equipment. I just need to know whether, in the real world, these distributed systems still have a particular section where they depend on the integrity of one electronic device.
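
    One concrete counterexample to the "one IP, one machine" intuition: nothing stops a DNS name from resolving to several addresses, each answered by a different front end (round-robin DNS), and a single advertised IP can itself front many machines behind a load balancer or via anycast routing. A zone-file sketch with placeholder addresses:

        ; two A records for the same name: clients rotate between them
        www.example.com.   300  IN  A  192.0.2.10
        www.example.com.   300  IN  A  192.0.2.11

    So the real-world single points of failure tend to be shared dependencies (a database master, a border router, DNS itself) rather than one literal entry machine.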

    Read the article

  • Repairing a TFS 2005 install or starting anew.

    - by Johan Buret
    Following: Installing Team Foundation Server on a shared database instance. We used a shared database instance setup for TFS 2005, and that was not a good idea, because of the Reporting Services dependency. The reporting instance on the server gives error code 404.

    What works now: basic source code control. We're able to check in and check out source code.

    What doesn't work: everything else, including opening and creating new team projects, build automation, and internal bug tracking.

    Goal: a fully working TFS install, keeping the history.

    1) A full install of TFS 2005 on the same server, but within its own database and reporting instance.
    2) Using another server might be an option, but it's really not preferred.

    Downtime should be minimal; my colleagues need to be able to work on the source. I've read the MSDN page about moving/restoring TFS 2005, but I'm still unsure about what to do. Thanks in advance for your help.

    Read the article

  • Caching a domain user's login on a local PC

    - by user630320
    We have a fully working domain in the UK, and around the world we have users who use a VPN (Check Point) to connect to our domain. One of the users in the USA has a laptop which he has never logged on to before (so it holds no cached login details for him). Does anyone know how to cache this user's login information on the laptop? I have tried netdom trust to add the user to the laptop, but I was not able to do this. At the moment the user logs in with a local administrator account and then uses the VPN to log on to our domain, but when it comes to accessing files on the domain, the user gets "access denied". When the user tries to log in directly, he gets "There are currently no logon servers available to service the logon request". Does anyone know how to add the user?

    Read the article

  • DNS caching server config problem

    - by Alex
    I have a BIND caching-only DNS server set up and working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD-related. So my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leaves me thinking that my caching server is not forwarding properly. For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none of the objects in that domain). Any suggestions? Here is my named.conf file:

        options {
            directory "/var/named";
            listen-on { 192.168.0.14; 127.0.0.1; };
            forwarders { ; ; };
            forward first;
        };

        zone "." in {
            type hint;
            file "db.cache";
        };

        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "db.127.0.0";
        };

        //forward zone for mydomain.local
        zone "mydomain.local" {
            type forward;
            forwarders { 192.168.1.21; };
        };
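
    One detail worth checking, offered as a guess based on common BIND setups rather than anything visible in the config: with the global forward first; in effect, a failed or refused answer from the zone's forwarder lets the query fall through to the root servers, which can never resolve a .local name and can surface as a timeout. Making the zone forward-only keeps it pinned to the DC:

        zone "mydomain.local" {
            type forward;
            forward only;
            forwarders { 192.168.1.21; };
        };

    It is also worth confirming that the DC allows queries (and any needed recursion) from the caching server's address, not just from client subnets.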

    Read the article

  • How do I completely turn my Mac into a host / mail server?

    - by idaho
    I have Mac OS X 10.6.3 (Snow Leopard). I have already set up MAMP so it acts like a local server, but the next thing I want to try is making it into a mail server where I can register an email address locally, maybe something like myname@myip or however it's done, and then have the ability to set up MX records, CNAMEs, and all that DNS good stuff. This will only be for educational purposes, so that I can fully understand how it all works. What do I need to do? I read something about MailServeSnow, but I can't get its domain to work; it must be down. Would this be the best option? Also, would I need Mac OS X Server, or would my OS be just fine? Thanks.
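
    For the education-only goal, the regular (non-Server) OS may well be enough: Snow Leopard ships with Postfix preinstalled, just not running (MailServeSnow is essentially a GUI over this same Postfix). A minimal sketch to wake it up and watch local delivery, with the hostname as a placeholder:

        # set the machine's mail name (placeholder value)
        sudo sh -c 'echo "myhostname = mymac.local" >> /etc/postfix/main.cf'

        # start Postfix and send a local test message
        sudo postfix start
        echo "test body" | mail -s "hello" yourusername

        # watch delivery in the mail log
        tail -f /var/log/mail.log

    Receiving mail from the outside additionally needs a public IP, port 25 reachability, and an MX record pointing at the machine, which is where the DNS practice comes in.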

    Read the article

  • Photoshop changes colors when moved to a different monitor

    - by Jason
    This Yahoo! Answers question pretty much sums up my predicament. Unfortunately, I haven't really found an actual answer. Here's a screenshot of what happens when I place Photoshop between my monitors. Imagine that the right square is the left side of my laptop and the yellow is the right side of my monitor. When I move the program fully to one side or the other, it takes on the corresponding color. Obviously the color on the right is correct, and Photoshop is changing the color of the image when it moves to my external monitor. Also notice how none of the tools or anything else are different colors; just the actual image itself. How do I fix this? EDIT: You may also notice that the color palette in the left corner is showing that yellow tinge. I didn't actually pick that yellow color; it's supposed to be showing the grey on the left that I used to fill in the square.

    Read the article

  • Which PSU should I choose? Is the biggest the best?

    - by Shiki
    I'm fully aware of PSUs' "Active PFC" and that they won't consume the rated wattage all the time (makes sense). But now I'm facing a PSU replacement (guys: NEVER buy a Chieftec. Seriously.) The question is: if one can get a bigger one (in my case 750 W vs. 650 W), should that person go for the bigger one? (The difference in price is not much.) No, I don't think I'll use all that much any time soon. (Feel free to help make the question more generic if it's really not OK in this form; I've been wondering about this for a while already. In my case it would be the XFX Black Edition Silver 750W and 650W.)

    Read the article

  • Formatting the output of a custom tool so that double-clicking an error in Visual Studio opens the file

    - by Ben Scott
    I've written a command line tool that preprocesses a number of files and then compiles them using CodeDom. The tool writes a copyright notice and some progress text to the standard output, then writes any errors from the compilation step using the following format:

        foreach (var err in results.Errors)
        {
            // err is CompilerError
            var filename = @"Path\To\input_file.xprt";
            Console.WriteLine(string.Format(
                "{0} ({1},{2}): {3}{4} ({5})",
                filename, err.Line, err.Column,
                err.IsWarning ? "" : "ERROR: ",
                err.ErrorText, err.ErrorNumber));
        }

    It then writes the number of errors, like "14 errors". This is an example of how an error appears in the console:

        Path\To\input_file.xrpt (73,28): ERROR: An object reference is required for the non-static field, method, or property 'Some.Object.get' (CS0120)

    When I run this as a custom tool in VS2008 (by calling it in the post-build event command line of one of my project's assemblies), the errors appear nicely formatted in the Error List, with the correct text in each column. When I roll over the filename, the fully qualified path pops up. The line and column are different from the source file because of the preprocessing, which is fine. The only thing that stands out is that the Project given in the list is the one that has the post-build event. The problem is that when I double-click an error, nothing happens. I would have expected the file to open in the editor. I'm vaguely aware of the Microsoft.VisualStudio.Shell.Interop namespace, but I think it should be possible just by writing to the standard output.
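
    One cheap experiment worth trying before reaching for Microsoft.VisualStudio.Shell.Interop, offered as a guess from how build tools normally interoperate with the Error List: Visual Studio's canonical diagnostic format has no space before the parenthesis and puts a lowercase "error" keyword and the code before the colon, i.e.

        Path\To\input_file.xrpt(73,28): error XP0001: An object reference is required ...

    where XP0001 stands in for whatever error-code scheme the tool uses. Output that deviates from this shape is sometimes still split into columns but can lose the navigation metadata, so reshaping the format string to match exactly may be all that's needed.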

    Read the article

  • Preloading RSS contents in Thunderbird before actually reading them

    - by Berry Tsakala
    I have Thunderbird 3.x and I'm subscribed to several RSS feeds. How can I tell Thunderbird to load/download any new RSS items in the background? The usual behavior with RSS feeds is that it downloads the headers, or a few introductory lines of the content, but only when I click a feed item does it start loading "for real". I really want to receive the feeds and not have to wait for them to load, the same way I receive emails in any email client: all messages fully downloaded at once. There could be several reasons for wanting this, by the way: e.g. if I only have a short connection window, I'd rather connect, sync everything at once, and read it later; or if I have a slow Wi-Fi connection, it's annoying to wait for each and every message while the computer sits idle during reading. Thanks.

    Read the article

  • Odd Language In a BIOS Message

    - by Josh
    So I started up my laptop today and was greeted with the following message (not a direct quote):

        The type of the AC adapter cannot be determined. This may interfere with your computer's performance. Try unplugging the AC adapter and then plugging it back in, thanks.

    The problem was that I hadn't fully secured the plug in the back of the computer. However, I was a little taken aback that a message from the BIOS said "thanks". Is this normal? Any chance the message is illegitimate (a virus)?

    Read the article

  • Remote offscreen rendering

    - by redmoskito
    My research lab recently added a server that has a beefy NVIDIA graphics card, which we would like to use for scientific computations. Since it isn't a workstation, we'll have to run our jobs remotely, over an ssh connection. Most of our applications require rendering with OpenGL to an offscreen buffer, then doing image analysis on the result in CUDA. My initial investigation suggests that X11 forwarding is a bad idea, because OpenGL rendering will occur on the client machine (or rather the X11 server; what a confusing naming convention!) and will suffer network bottlenecks when sending our massive textures. We will never need to display the output, so it seems like X11 forwarding shouldn't be necessary, but OpenGL needs $DISPLAY to be set to something valid or our applications won't run. I'm sure render farms exist that do this, but how is it accomplished? I think this is probably a simple X11 configuration issue, but I'm too unfamiliar with it to know where to start. We're running Ubuntu Server 10.04, with no gdm, GNOME, etc. installed. However, the xserver-xorg package is installed.
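
    One common pattern under these constraints (a sketch of the general approach, not specific to this machine) is to run an X server on the compute box itself and point $DISPLAY at it from within the ssh session. The minimal software-only variant uses Xvfb, which satisfies OpenGL's need for a display but renders in software, without touching the NVIDIA card:

        # on the server, inside the ssh session
        Xvfb :1 -screen 0 1280x1024x24 &   # virtual framebuffer, software rendering only
        export DISPLAY=:1
        ./render_job                       # hypothetical application

    Hardware-accelerated offscreen rendering typically means running the real X server against the NVIDIA driver on an unused virtual terminal and letting remote sessions connect to it (or fronting it with VirtualGL); either way, rendering stays on the server, so no textures cross the network.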

    Read the article

  • Installing Windows on multiple computers

    - by Rob
    At work we've decided to buy SSDs to speed up our computers. We'd also like to upgrade from Windows 7 to Windows 8.1 (read: clean install). I'm talking about 10 computers which are going to get a fully clean installation. Is there any trick I can use to avoid installing Windows 10 times? The computers are (mostly) different in hardware, so I think straight disk duplication is not a good option. What can I do? I'd like to spend as little time as possible, because all the computers are also going to need SQL Management Studio and Visual Studio 2012/2013. Thanks!
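
    Differing hardware is less of a blocker than it used to be. The usual approach, sketched here from the standard Microsoft deployment tooling (treat the details as assumptions for this environment), is to build one reference machine with the common applications and then generalize it, which strips machine-specific drivers and SIDs before capture:

        rem on the reference PC, after installing Windows 8.1 + SSMS + Visual Studio
        C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

    The generalized image can then be captured and applied to the other nine machines (e.g. with DISM or Windows Deployment Services), with Windows redetecting drivers on first boot. For only ten PCs, even a USB stick carrying the captured image is usually much faster than ten manual installs.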

    Read the article

  • Detach a filter driver from certain drivers?

    - by Protector one
    The driver for my laptop's keyboard has a kernel-mode filter driver from Synaptics (SynTP.sys) attached. Is it possible to detach the SynTP.sys filter driver from my keyboard's driver without detaching it from my touchpad's driver? This Microsoft Support page explains how to completely disable a filter driver, but my touchpad requires SynTP.sys as well. The reason I'm trying to do this is that the Synaptics driver disables my touchpad when I type (explained fully in this question: Use touchpad while "typing"?). Since I don't have a solution to that problem, I figured removing the filter driver from the keyboard could prevent the Synaptics driver from detecting keystrokes, thus stopping it from disabling the touchpad.
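
    For anyone attempting this, the place to look is the registry: filter drivers are attached either class-wide or per-device via UpperFilters values, and whether SynTP.sys sits at the keyboard class level or on the individual device determines whether a selective detach is even possible. Inspecting both is the first step (the GUID below is the standard keyboard class; the device instance path is machine-specific, so it is left as a placeholder):

        rem class-wide filters applied to all keyboards
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E96B-E325-11CE-BFC1-08002BE10318}" /v UpperFilters

        rem per-device filters: find the keyboard's instance path in Device Manager first
        reg query "HKLM\SYSTEM\CurrentControlSet\Enum\<device-instance-path>" /v UpperFilters

    If the entry is class-wide, removing SynTP.sys from that value detaches it from every keyboard (back up the key first; a broken keyboard filter stack can prevent login). Per-device values apply in addition to the class ones, so there is unfortunately no supported way to subtract a class filter for just one device.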

    Read the article
