Search Results

Search found 18961 results on 759 pages for 'far se'.

Page 274 of 759

  • Microsoft DNS creating subdomain when adding address record

    - by dwdet
    When attempting to add a normal A record (which has always worked so far), the Microsoft dnsmgmt snap-in in MMC returns a successful creation message: "The host record oneworld.mydomain.com was successfully created". However, after refreshing the zone, it displays a folder icon next to "oneworld", indicating a subdomain rather than the A record dnsmgmt said it created. This behavior has never happened before. We have tried this on two separate PCs, and also remote-consoled into the primary DNS server and added the same A record there, with the same results. Any help is appreciated.
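    One quick way to rule out the MMC snap-in itself is to add and then list the record from the command line with dnscmd; the zone name, host name and IP below are placeholders taken from the question, not a verified fix:

      REM run on (or against) the primary DNS server; adjust names and IP as needed
      dnscmd /RecordAdd mydomain.com oneworld A 192.0.2.10
      dnscmd /EnumRecords mydomain.com oneworld

    If dnscmd shows a clean A record while the console still draws a folder, the zone data is fine and only the GUI view is misleading.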

    Read the article

  • How do I classify using GLCM and SVM Classifier in Matlab?

    - by Gomathi
    I'm working on a project for liver tumor segmentation and classification. I used Region Growing and FCM for liver and tumor segmentation respectively, then the Gray Level Co-occurrence Matrix (GLCM) for texture feature extraction. I have to use a Support Vector Machine for classification, but I don't know how to normalize the feature vectors. Can anyone tell me how to program it in Matlab? To the GLCM program I gave the tumor-segmented image as input. Was that correct? If so, I think my output will also be correct. My GLCM code, as far as I have got, is:

      I = imread('fzliver3.jpg');
      GLCM = graycomatrix(I,'Offset',[2 0;0 2]);
      stats = graycoprops(GLCM,'all')
      t1 = struct2array(stats)

      I2 = imread('fzliver4.jpg');
      GLCM2 = graycomatrix(I2,'Offset',[2 0;0 2]);
      stats2 = graycoprops(GLCM2,'all')
      t2 = struct2array(stats2)

      I3 = imread('fzliver5.jpg');
      GLCM3 = graycomatrix(I3,'Offset',[2 0;0 2]);
      stats3 = graycoprops(GLCM3,'all')
      t3 = struct2array(stats3)

      t = [t1;t2;t3]
      xmin = min(t);
      xmax = max(t);
      scale = xmax-xmin;
      tf=(x-xmin)/scale

    Was this a correct implementation? Also, I get an error at the last line. My output is:

      stats =
          Contrast: [0.0510 0.0503]
          Correlation: [0.9513 0.9519]
          Energy: [0.8988 0.8988]
          Homogeneity: [0.9930 0.9935]
      t1 =
          0.0510  0.0503  0.9513  0.9519  0.8988  0.8988  0.9930  0.9935
      stats2 =
          Contrast: [0.0345 0.0339]
          Correlation: [0.8223 0.8255]
          Energy: [0.9616 0.9617]
          Homogeneity: [0.9957 0.9957]
      t2 =
          0.0345  0.0339  0.8223  0.8255  0.9616  0.9617  0.9957  0.9957
      stats3 =
          Contrast: [0.0230 0.0246]
          Correlation: [0.7450 0.7270]
          Energy: [0.9815 0.9813]
          Homogeneity: [0.9971 0.9970]
      t3 =
          0.0230  0.0246  0.7450  0.7270  0.9815  0.9813  0.9971  0.9970
      t =
          0.0510  0.0503  0.9513  0.9519  0.8988  0.8988  0.9930  0.9935
          0.0345  0.0339  0.8223  0.8255  0.9616  0.9617  0.9957  0.9957
          0.0230  0.0246  0.7450  0.7270  0.9815  0.9813  0.9971  0.9970
      ??? Error using ==> minus
      Matrix dimensions must agree.

    The images are:
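    For what it's worth, the last line scales a variable x that this script never defines (hence the dimension mismatch against whatever x already exists in the workspace); the min-max scaling presumably has to act on the feature matrix t itself, column by column. A minimal Matlab sketch of that, assuming per-feature scaling to [0,1] is the intent:

      xmin   = min(t, [], 1);                       % per-column minima (1 x 8)
      xrange = max(t, [], 1) - xmin;                % per-column ranges
      tf = (t - repmat(xmin, size(t,1), 1)) ./ repmat(xrange, size(t,1), 1);

    The same xmin and xrange then have to be reused to scale any new feature vectors before handing them to the SVM.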

    Read the article

  • Sar data not collected for 10 minutes

    - by Ichorus
    We have a Red Hat server whose only job is to run a JBoss server. Monitoring said that memory usage spiked (the JVM is limited to far less than the total memory on the system) and JBoss crashed. We restarted it and everything seems OK now. The odd thing is that the sar data for the 10 minutes leading up to the crash is simply not there. The load average got up into the 50s, but I have seen severely busy systems (350+ load average) still collect sar data. Does anyone have any idea what could cause sar to stop collecting data?
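    On Red Hat, sar data is written by the sa1 collector, which cron fires every 10 minutes by default, so a 10-minute hole usually means exactly one collection run never completed, for example because the box was thrashing and cron or sadc was starved or killed. A few checks, assuming the stock sysstat layout (paths may differ on other builds):

      cat /etc/cron.d/sysstat              # confirm sa1 really runs every 10 minutes
      sar -q -f /var/log/sa/saDD           # replace DD with the day of the crash; shows the gap
      grep -i sadc /var/log/cron           # did cron launch the collector during that window?
      grep -i oom /var/log/messages        # was anything OOM-killed around the crash?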

    Read the article

  • Ignoring GET parameters in Varnish VCL

    - by JamesHarrison
    I've got a site set up which exposes some APIs to developers, in the format /api/item.xml?type_ids=34,35,37&region_ids=1000002,1000003&key=SOMERANDOMALPHANUM. In this URI, type_ids is always set; region_ids and key are optional. The important thing to note is that the key variable does not affect the content of the response; it is only used for internal tracking of requests, so we can identify people who make slow or otherwise unwanted requests. In Varnish, we have VCL like this:

      if (req.http.host ~ "the-site-in-question.com") {
        if (req.url ~ "^/api/.+\.xml") {
          unset req.http.cookie;
        }
      }

    We just strip cookies out and let the backend handle cache times (this is a workaround, since Rails/authlogic sends session cookies with API responses). At present, though, distinct developers are effectively hitting different caches, since &key=SOMEALPHANUM is considered part of the Varnish hash used for storage. This is obviously not great, and I'm trying to work out how to tell Varnish to ignore that part of the URI.
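    One common approach (a sketch against the Varnish 2/3-era VCL shown above, not a tested config) is to rewrite req.url in vcl_recv so the key parameter never reaches the hash:

      sub vcl_recv {
        if (req.url ~ "^/api/.+\.xml") {
          # drop the tracking-only key parameter so requests differing only in key share a cache object
          set req.url = regsub(req.url, "([?&])key=[^&]*&?", "\1");
          set req.url = regsub(req.url, "[?&]$", "");
          unset req.http.cookie;
        }
      }

    After the rewrite the backend only ever sees the stripped URL on a miss, so per-developer tracking would need to come from Varnish's own request logs rather than from the backend's.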

    Read the article

  • "Are You There?".. India Tops Logistics List of Emerging Nations

    - by [email protected]
    It's just amazing how far, wide and deep modern supply chains are extending. AMR reported on 15 Apr (M. Burkett, A. Reese) in an SCM webcast that "Penetrating Emerging Markets" was the top priority for organizations, based on a recent survey. I took this as both adding new consumers to their prospect list and leveraging "lower-cost labor arbitrage". (Read "3 Billion Capitalists".) Supply Chain Quarterly reports that India and Brazil received the highest rankings among logistics markets in developing nations: India tops a new index that scores the attractiveness of emerging-market logistics markets to foreign investors. Developed by the UK-based research firm Transport Intelligence, the new Emerging Market Logistics Index rated 38 developing countries on 3 factors:
    1. "Market size and growth attractiveness," which considered a country's economic output, projected growth rate, and population size.
    2. "Market compatibility," which examined how well matched a nation was with the services offered by global logistics providers, including its security levels, market accessibility, foreign direct investment, distribution of wealth and population, and development of its service sector.
    3. "Connectedness," which rated the efficiency of customs and border controls, liner shipping connections, and transportation infrastructure.
    India claimed the top spot due to its market size and growth prospects. Brazil is second because of its economic performance, good levels of market accessibility, and improving domestic and international transport connections. Are you there? For more information see www.transportintelligence.com/articles_papers.
    The top 10 emerging countries: India, Brazil, Indonesia, Mexico, Russia, Turkey, United Arab Emirates, Egypt, Saudi Arabia, Malaysia. (Source: Transport Intelligence, The Emerging Markets Logistics Index, March 2010)

    Read the article

  • How to change from own Internal/External DNS to use an outsourced service like DNS Made Easy?

    - by Joakim
    Our current setup is a co-located Linux box with an OpenVZ kernel and a handful of virtual containers for www, mail etc., plus one container running Bind9 with a split-views configuration serving external and internal DNS. The hardware node runs a Shorewall firewall and all containers use private IPs. The box (and its DNS) basically handles web and mail for a handful of domains and it works well, but we still think it would be a good idea to outsource the public DNS, which brings me to my question. Although I am fairly comfortable with the server side and DNS, I'm far from a pro and basically need confirmation that I am thinking in the right direction: I simply move the content of our external view (with its zone files) to the external service, keep the internal view (or actually remove the view construct altogether), update the registrar with the new provider's name servers, and wait for propagation. Or have I missed something? Maybe someone else here already runs something similar and can share some experiences? I found this question, which at least confirms it can be done.
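    If it helps, this is roughly what named.conf collapses to once the external data lives at the hosted service: the views disappear and only the internal zones remain. Purely a sketch; the addresses, paths and zone name are placeholders, not taken from the actual setup:

      options {
          directory "/var/named";
          recursion yes;
          allow-query { 127.0.0.1; 10.0.0.0/8; };   // internal clients only (example range)
      };

      zone "example.com" {
          type master;
          file "internal/example.com.zone";          // the zone file that used to sit in the internal view
      };

    The external service then gets a copy of whatever is currently in the external view's zone files, and the registrar is pointed at that service's name servers.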

    Read the article

  • Recommend a wireless PCI card for Windows 7

    - by Dan
    I have a crummy RT2500-based 11g card which does work in Win7 with the Vista driver (3.2.0.0), but it dies about every two hours or so. Googling around has led me to conclude that Ralink drivers are basically borked and that I need something else for a stable connection. Can anyone recommend a suitable wireless adapter? It needs to be:
    - 802.11g (draft-N nice but not at all essential)
    - PCI (I already have far more USB devices than can possibly be good for me)
    - Very reliable
    Money isn't an object, within reason. All input gratefully received!

    Read the article

  • What are good replacements for Microsoft Visio 2003?

    - by James
    I have been using Visio 2003 on Windows XP to generate UML diagrams. I have encountered the following problems so far:
    - There is no way to generate/print the documentation written for class attributes/methods.
    - No automatic code generation is supported.
    I have already produced a lot of diagrams and only discovered the above problems at a much later stage. Now I would like to get around them by choosing another tool that is compatible with the Visio file format (.vsd), which would save the time of redrawing all the diagrams, and that also provides the features above. Could you kindly suggest an alternative (Visio-compatible) tool? (I have looked at a similar SU question, but it does not suggest tools that solve the above problems. I am open to free as well as licensed tools, with priority to free :) ) Thanks, James

    Read the article

  • Apple XRaid questions

    - by luckytaxi
    I inherited an environment with a couple of Apple XRAID SANs.
    1 - I have a 14-drive setup that's split into 5 LUNs on EACH side. The SAN goes into a fibre switch along with the servers that are attached to it. LUN masking is enabled on the SAN and, as far as I know, there isn't any zoning on the fibre switch. Question: I have a server that's assigned two LUNs, one from each side of the controller. For some reason it only sees one LUN (from the upper controller) and not the one from the lower controller. The lower controller itself seems to be working fine, as I have other servers attached to LUNs on it.
    2 - I see a little "disclaimer" saying that any changes to the XRAID will result in a reboot. So, if I add/remove hosts, this thing is going to reboot?!?!?!

    Read the article

  • Final Man vs. Machine Round of Jeopardy Unfolds; Watson Dominates

    - by ETC
    The final round of IBM's Watson against Ken Jennings and Brad Rutter ended last night, with Watson finishing with a strong lead over its two human opponents. Read on to catch a video of the match and see just how quick Watson is on the draw. Watson tore through many of the answers, the little probability bar at the bottom of the screen showing it was often 95%+ confident in its answers. Some of the more interesting stumbles were, as in the previous matches, based on nuance. By far the biggest "What?" moment of the night, however, was when it answered the Daily Double clue "The New Yorker's 1959 review of this said in its brevity and clarity it is 'unlike most such manuals, a book as well as a tool'". Watson, inexplicably, answered "Dorothy Parker". You can't win them all, eh? Check out the video below to see Watson in action on its final day.

    Read the article

  • How do I get a Canon MG, MP and MX series USB printer working?

    - by Old-linux-fan
    The MP6150 printer driver installed itself upon plugging in the printer. The printer is recognized (lsusb shows it) but does not mount. If the printer is recognized, the driver must be working (or?), but something is blocking the system from mounting the printer. I have tried the usual things: power-cycling the printer, restarting Ubuntu, etc. Below are the results of lsusb and /etc/fstab:

      hans@kontor-linux:~$ lsusb
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 001 Device 004: ID 04a9:174a Canon, Inc.
      Bus 002 Device 002: ID 1058:1001 Western Digital Technologies, Inc. External Hard Disk [Elements]
      Bus 004 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser

      hans@kontor-linux:~$ sudo cat /etc/fstab
      [sudo] password for hans:
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc nodev,noexec,nosuid 0 0
      # / was on /dev/sda6 during installation
      UUID=eaf3b38d-1c81-4de9-98d4-3834d674ff6e / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=93a667d3-6132-45b5-ad51-1f8a46c5b437 none swap sw 0 0

    Here is what I have tried: I tried the HK link again, but no luck so far. However, I connected the printer wirelessly to the router from another XP box. Installing the driver from ppa:michael-gruz/canon doesn't work, but the driver is installed.
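    As a side note, USB printers aren't mounted like disks; they are managed by CUPS, so the useful checks are on the CUPS side. A few generic commands (nothing here is specific to this exact model):

      sudo lpinfo -v                         # does CUPS see a usb://Canon/... device URI?
      lpstat -p -d                           # print queues CUPS knows about, and the default one
      tail -n 50 /var/log/cups/error_log     # recent CUPS errors, if a queue exists but jobs fail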

    Read the article

  • Mod Rewrite - directing HTTP/HTTPS traffic to the appropriate virtual hosts

    - by kce
    I have an Apache2 web server (v. 2.2.16) running on Debian, hosting three virtual hosts. The first two hosts are HTTP only (server1 and server2); the last host is HTTPS only (server3). My virtual host configuration files can be found at pastebin. I would like to use mod_rewrite to get the following behavior:
    - Any request for http://server3 is redirected to https://server3.
    - Any request for either https://server1 or https://server2 is redirected to http://server1 or http://server2 as appropriate.
    Currently, requesting http://server3 gives you a 403 because indexing is disabled for that host, and a request for https://server1 or https://server2 resolves as https://server3 (as it's the only virtual host running SSL). This behavior is not desirable. So far I have added a rewrite rule to the central configuration file (myServerWideConfs.conf), unfortunately with no effect. I was under the impression that this rule (or something similar) should rewrite all https:// requests for server1 and server2 to the proper http:// request:

      RewriteEngine On
      RewriteCond %{HTTP_HOST} !^server3 [NC]
      RewriteRule (.*) http://%{HTTP_HOST}

    My question is two-fold: What mod_rewrite rules should I use to accomplish this, and where should they go? Debian's packaging of Apache has a pretty granular (i.e., fractured) configuration file layout; should my rewrite rules go in /etc/apache2/apache2.conf, /etc/apache2/conf.d/myServerWideConfs.conf, or the individual virtual host files? Is mod_rewrite the right tool for this, or am I missing something in my greater Apache configuration?
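    A sketch of one way to do it (untested, using the host names from the question): put the rules in the virtual hosts that actually answer each request rather than in the server-wide file. One likely reason the central rule had no effect is that mod_rewrite directives in the main server context are not inherited by virtual hosts unless RewriteOptions Inherit is set.

      # in the *:443 VirtualHost (the only SSL vhost, so it answers every https:// name)
      RewriteEngine On
      RewriteCond %{HTTP_HOST} !^server3 [NC]
      RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

      # in the *:80 VirtualHost that currently answers for server3
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^server3 [NC]
      RewriteRule ^ https://server3%{REQUEST_URI} [R=301,L]

    For plain host-to-host redirects like these, the simpler Redirect directive from mod_alias would also work inside a dedicated vhost; mod_rewrite is only strictly needed because the SSL vhost has to branch on the Host header.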

    Read the article

  • Naming conventions: camelCase versus underscore_case? What are your thoughts about it? [closed]

    - by poelinca
    I've been using underscore_case for about 2 years and I recently switched to camelCase because of a new job (I've been using the latter for about 2 months and I still think underscore_case is better suited for large projects with a lot of programmers involved, mainly because the code is easier to read). Now everybody at work uses camelCase because (so they say) the code looks more elegant. What are your thoughts about camelCase or underscore_case? P.S. Please excuse my bad English.

    Edit - some updates first:
    - The platform used is PHP (but I'm not expecting strictly PHP-related answers; anybody can share their thoughts on which would be best to use, that's why I came here in the first place).
    - I use camelCase just like everybody else in the team (just as most of you recommend).
    - We use Zend Framework, which also recommends camelCase.
    Some examples (related to PHP): the CodeIgniter framework recommends underscore_case, and honestly the code is easier to read. ZF recommends camelCase, and I'm not the only one who thinks ZF code is a tad harder to follow. So my question, rephrased: take a case where you have platform Foo, which doesn't recommend any naming convention, and it's the team leader's choice to pick one. You are that team leader: why would you pick camelCase, or why underscore_case? P.S. Thanks everybody for the prompt answers so far.

    Read the article

  • Trying to format drive fails

    - by david
    Since I will be doing an internship for which I need to use Windows software, I have decided to ruin my day trying to remove my Ubuntu 12.04, install Win XP SP3 (the Ubuntu dual-boot documentation suggests installing Windows first and then Ubuntu, to avoid bootloader problems the other way around), and then reinstall Ubuntu 12.04, since I would like to keep using it as my primary operating system and use WinXP exclusively for the internship. Other than that, I would like to have a partition for data which can be used by both Ubuntu and Windows.

    So far I have used the Disk Utility run from an Ubuntu live CD to format my drive with a Master Boot Record (conscious of the fact that this way I will lose all my data, which I saved to an external drive beforehand, and that my Ubuntu won't work any more afterwards), creating partitions for Windows (NTFS), personal data (FAT, since as far as I know both Ubuntu and Windows can deal with this), a swap partition for Linux, and one partition for Ubuntu (ext4). Trying to install Win XP from CD then gives me a blue screen, which stops the setup and tells me to remove all recently installed drives and to run CHKDSK. So I thought that maybe Windows doesn't like pre-partitioned drives for its installation, and thus I need to re-format my hard drive in order to have a completely "new" drive, which I can then partition during the Windows installation to create the partitions I need. Trying to do this, though, the Disk Utility run from the live CD gives me this warning:

      Error creating partition table: helper exited with exit code 1:
      In part_create_partition_table: device_file=/dev/sda, scheme=0
      got it
      got disk
      committed to disk
      BLKRRPART ioctl failed for /dev/sda: Device or resource busy

    I do not understand why it tells me that the hard drive is busy, because, as stated above, I am doing all this from a live CD. Thus, my questions are:
    1. How can I resolve the error given by the Disk Utility?
    2. Does it make sense to use four partitions in the way mentioned above? And if not, which partitions should I create?
    3. Can I, theoretically, partition my drive from an Ubuntu live CD in order to create the partitions I want, and then install first Windows and then Ubuntu?
    Thanks for any help, David
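    One common cause of that "Device or resource busy" (offered only as a guess, since nothing in the output proves it): the live session has already activated the swap partition or auto-mounted something on /dev/sda, and the kernel refuses to re-read the partition table of a disk that is in use. Worth checking from the live CD before reformatting:

      sudo swapoff -a            # release any swap partition the live session grabbed
      mount | grep /dev/sda      # is anything on the disk still mounted?
      sudo umount /dev/sda1      # repeat for each mounted partition found above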

    Read the article

  • Oracle Enterprise Manager sessions on the last day of the Oracle Open World

    - by Anand Akela
    Hope you have had a very productive Oracle Open World so far. Hopefully many of you attended the customer appreciation event last night at Treasure Island. We still have many Enterprise Manager related sessions today, Thursday, the last day of Oracle Open World 2012. Download the Oracle Enterprise Manager 12c OpenWorld schedule (PDF).

    Oracle Enterprise Manager Cloud Control 12c (and Private Cloud) sessions (Time / Title / Location):
    - 11:15 AM - 12:15 PM / Application Performance Matters: Oracle Real User Experience Insight / Palace Hotel - Sea Cliff
    - 11:15 AM - 12:15 PM / Advanced Management of JD Edwards EnterpriseOne with Oracle Enterprise Manager / InterContinental - Grand Ballroom B
    - 11:15 AM - 12:15 PM / Spark on SPARC Servers: Enterprise-Class IaaS with Oracle Enterprise Manager 12c / Moscone West - 3018
    - 11:15 AM - 12:15 PM / Pinpoint Production Applications' Performance Bottlenecks by Using JVM Diagnostics / Marriott Marquis - Golden Gate C3
    - 11:15 AM - 12:15 PM / Bringing Order to the Masses: Scalable Monitoring with Oracle Enterprise Manager 12c / Moscone West - 3020
    - 12:45 PM - 1:45 PM / Improving the Performance of Oracle E-Business Suite Applications: Tips from a DBA's Diary / Moscone West - 2018
    - 12:45 PM - 1:45 PM / Advanced Management of Oracle PeopleSoft with Oracle Enterprise Manager / Moscone West - 3009
    - 12:45 PM - 1:45 PM / Managing Sun Servers and Oracle Engineered Systems with Oracle Enterprise Manager / Moscone West - 2000
    - 12:45 PM - 1:45 PM / Strategies for Configuring Oracle Enterprise Manager 12c in a Secure IT Environment / Moscone West - 3018
    - 12:45 PM - 1:45 PM / Using Oracle Enterprise Manager 12c to Control Operational Costs / Moscone South - 308
    - 2:15 PM - 3:15 PM / My Oracle Support: The Proactive 24/7 Assistant for Your Oracle Installations / Moscone West - 3018
    - 2:15 PM - 3:15 PM / Functional and Load Testing Tips and Techniques for Advanced Testers / Moscone South - 307
    - 2:15 PM - 3:15 PM / Oracle Enterprise Manager Deployment Best Practices / Moscone South - 104

    Read the article

  • Where do I place XNA content pipeline references?

    - by Zabby Wabby
    I am relatively new to XNA and have started to delve into the content pipeline. I have already figured out the tricky issue of adding a game library containing classes for any type of .xml file I want to read. Here's the issue: I am trying to handle the reading of all XML content through an XMLHandler object that uses the intermediate deserializer. Any time such data needs to be read, the appropriate method on this object is called. As a simple example, something like this would happen when a character levels up:

      public Spell LevelUp(int levelAchieved)
      {
          return XMLHandler.FindSkillsForLevel(levelAchieved);
      }

    This method would then read the proper .xml file and send back the spell for the character to learn. However, the XMLHandler has issues even being created: I cannot get it to use the Microsoft.Xna.Framework.Content.Pipeline namespace. I get an error on the using statement in the XMLHandler class:

      using Microsoft.Xna.Framework.Content.Pipeline.Serialization.Intermediate;

    The error is a typical reference error: "Type or namespace name 'Pipeline' does not exist in the namespace 'Microsoft.Xna.Framework.Content' (are you missing an assembly reference?)". I THINK this is because this namespace is already referenced in my game's content. I would really have no issue placing this object within my game's Content project (since that is ALL it deals with anyway), but the Content project does not seem capable of holding anything but content files. In summary, I need to use the intermediate deserializer in my main project's logic, but, as far as I can make out, I can't reference the associated namespace outside of the game's content. I'm not a terribly well-versed programmer, so I may just be missing some big detail I've never learned. How can I make this object accessible to all projects within the solution? I will gladly post more information if needed!
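    For context, the runtime side of this usually boils down to something like the sketch below (type names and file paths are placeholders, not from the project in question). It compiles only when the main game project, not just the Content project, references the Microsoft.Xna.Framework.Content.Pipeline assembly, which on Windows/XNA 4.0 also means targeting the full .NET Framework 4 rather than the Client Profile; a missing reference of that kind is the usual cause of the error quoted above.

      using System.Xml;
      using Microsoft.Xna.Framework.Content.Pipeline.Serialization.Intermediate;

      public static class XMLHandler
      {
          // Deserialize any IntermediateSerializer-format XML file into a T at run time.
          public static T Load<T>(string path)
          {
              using (XmlReader reader = XmlReader.Create(path))
              {
                  return IntermediateSerializer.Deserialize<T>(reader, null);
              }
          }
      }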

    Read the article

  • sqlcmd backup script failing

    - by Bryan
    I'm trying to use a simple batch script to back up a local instance of SQL Express 2012, as follows:

      @echo off
      SET BACKUP_DIR=E:\BackupData
      SET SERVER=.\\sqlexpress
      set dom=%date:~0,2%
      set month=%date:~3,2%
      set year=%date:~6,4%
      set file=%year%-%month%-%dom%
      sqlcmd -S %SERVER% -d master -Q "exec sp_msforeachdb 'BACKUP DATABASE [?] TO DISK=''%BACKUP_DIR%\?.Full.%file%.bak'''"

    The script fails with the following error:

      Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : Client unable to establish connection due to prelogin failure.

    This is on Server 2008 R2; my SQL database instance (on localhost) is named SQLEXPRESS. There is also an instance of SQL Express 2008 on the system (hence client 10.0). The database is configured to use a trusted connection, and the .NET desktop software deployed on our network PCs is able to access the database without any problem. Am I missing something obvious here? I've done a fair amount of searching for this error message and haven't found anything particularly useful so far.
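    One thing worth ruling out first (a guess, not a verified diagnosis): in a plain .cmd file a backslash does not need doubling, so SET SERVER=.\\sqlexpress hands sqlcmd a literal ".\\sqlexpress" as the server name. A quick interactive check outside the script:

      REM single backslash, Windows authentication; ask the instance to identify itself
      sqlcmd -S .\SQLEXPRESS -E -d master -Q "SELECT @@SERVERNAME, @@VERSION"

    If that works at a prompt but the script still fails, the problem is in the script's server name or quoting rather than in basic connectivity.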

    Read the article

  • SQLAuthority News – Story of Seattle – SQLPASS 2011 Event Log

    - by pinaldave
    Just like every year, I attended SQL PASS in Seattle earlier this month. The event ran from Oct 11-14, 2011 in the Seattle convention center. I have been to Seattle more than 6 times so far, so it is not a new city for me anymore; it has always impressed me with its vibrant life and pleasant weather. Just like every other time, I had an excellent experience in the city. Though I arrived on the day of the event and left right after it was over, and so hardly saw Seattle, I still have some good experiences to share. Here are a few quick photographs from my quick trip:
    - Skyline of Seattle
    - Seattle Convention Center
    - A shop, Tenzing Momo and Co, at Pike St Market
    - The Seattle Gum Wall
    - Shoreline in Seattle
    - Nigel and Paras
    - First Starbucks (relocated)
    - People on the streets of Seattle
    - Food at Sandy's (all veg)
    Well, this is a short summary of my extremely quick city tour of Seattle. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • How to create a read-only root Linux that can be mounted as writable for persistent changes?

    - by Mr Anderson
    I'd like a read-only file system that runs almost entirely in RAM, but where the compact flash or hard drive can be mounted and made writable to make persistent changes. How do I do this on Linux? I've looked at several tutorials, but none really explain how to create such a system with the option of mounting the storage device and making persistent changes. I have looked at this so far: http://chschneider.eu/linux/thin_client/ . I also looked on the old Gentoo wiki, but the article was very specific to Gentoo. I'll be using a Debian-based Linux, but it would be nice if someone could explain how to do this with fairly generic instructions that would work on any Linux distro. Thanks.
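    The usual generic pattern (a sketch only, not a full recipe for any particular distro) is to mount the real root read-only, give the paths that must stay writable a tmpfs, and remount the root read-write just long enough to make persistent changes:

      # /etc/fstab: root stays read-only, volatile paths live in RAM
      /dev/sda1  /         ext4   ro,noatime          0 1
      tmpfs      /tmp      tmpfs  defaults,noatime    0 0
      tmpfs      /var/log  tmpfs  defaults,noatime    0 0

      # making a persistent change later:
      mount -o remount,rw /
      # ... install a package or edit a config file ...
      mount -o remount,ro /

    Real systems also need writable homes for things like /var/run and DHCP leases, so expect to add a few more tmpfs entries or bind mounts once you see what complains at boot.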

    Read the article

  • cPanel - Wildcard SSL - How to point *.domain.com to one root and sub.domain.com to another root

    - by Harry Muscle
    I have a wildcard (*.domain.com) SSL certificate installed on my cPanel server. I have domain.com configured to point to /domain.com as its document root and to use this wildcard SSL certificate. I also have sub.domain.com configured to point to /sub.domain.com as its document root; note that I have not explicitly configured sub.domain.com to use the wildcard SSL certificate. When I go to "http://sub.domain.com" it goes to the correct document root. My problem is that when I go to "https://sub.domain.com" it goes to the wrong root: the root configured for the wildcard SSL. I've been trying to find information on how to configure sub.domain.com to use the SSL certificate and still go to the correct document root, but so far I haven't found anything concrete. Do I follow the same steps I used for configuring the certificate for domain.com, but reuse the same certificate and specify dev.domain.com as the domain the certificate is for (instead of *.domain.com)? Or is there something else I should be doing? This is a production server, so I don't want to play around too much; I'm hoping to find the correct information before proceeding.

    Read the article

  • How do I disable Tomcat? [closed]

    - by Dave King Popeye Mason
    Possible Duplicate: How to disable Tomcat on Linux? According to my server host, something called Tomcat is hogging all my resources and slowing down the server. As far as I'm aware I'm not using it, as the only things running on the server are Plesk and a few WordPress installations. I'm a real dummy at using SSH; I can navigate to folders, change permissions and that's about it. Could somebody explain to me, as if I'm a 5-year-old, how to disable Tomcat (and also how to stop it re-enabling on startup)? Thanks!
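    For reference, on a Plesk-era Linux box Tomcat normally runs as a SysV service, though the service name varies (tomcat5, tomcat6, psa-tomcat, ...), so treat this only as a generic sketch and check the name first:

      chkconfig --list | grep -i tomcat     # find the exact service name and whether it starts at boot
      service tomcat5 stop                  # stop it now (substitute the name found above)
      chkconfig tomcat5 off                 # keep it from coming back after a reboot

    On Debian/Ubuntu-style hosts the equivalents would be /etc/init.d/tomcat6 stop and update-rc.d tomcat6 disable.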

    Read the article

  • TechEd 2012 - last session

    - by Stefan Barrett
    Nearly over. For the last session I'm attending a talk on C++ apps in Windows Metro. When I came to TechEd I didn't think I would attend so many sessions on C++, but somehow that has proved more interesting this year. While .NET 4.5 is interesting, I've been playing with it for a while now, so there weren't a ton of surprises. Of course, I still want it at work, but who knows how long that will take, so I will just have to use it at home, once I get the licensing sorted out. So expensive. I've been pleasantly surprised by Windows 8 and will be trying that out, and at least that is covered on TechNet. While the weather has not always been perfect during TechEds, this is the worst I've seen it so far: it's wet outside. Next year's conference in New Orleans should be interesting, well, outside the conference itself that is. I do like conferences held within the city itself, unlike Orlando, where there isn't really anything nearby.

    Read the article

  • Hyper-V core NIC speeds and registry changes

    - by gary
    Good afternoon. On a Dell PE T610 I have Hyper-V core running, with 2 x Broadcom BCM5709C NetXtreme II GigE NICs installed. I have noticed that copying large files (17 GB, for example) from a physical server on the network to the Hyper-V host's local drive [not a VM guest] is very slow compared to copying between physical servers:
    - Copying a 17 GB file from a physical server to the Hyper-V host takes 30 minutes.
    - Copying a 17 GB file from a physical server to another physical host takes 15 minutes.
    Can someone tell me exactly which registry values I should disable on the Hyper-V NICs to improve performance? So far I have gone to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318} and set the following to 0 on both physical NICs:
    - *LSOv1IPv4
    - *LSOv2IPv6
    - *TCPUDPChecksumOffloadIPv4
    - *TCPUDPChecksumOffloadIPv6
    Should I also disable *TCPConnectionOffloadIPv4 and *TCPConnectionOffloadIPv6? Many thanks in advance.
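    As an aside (a generic sketch for Server 2008 R2-era hosts, not specific advice for this box): the *TCPConnectionOffload* values correspond to TCP Chimney offload, which can also be inspected and toggled globally with netsh instead of per-adapter registry edits, and which is usually easier to test and revert that way.

      netsh int tcp show global                      # current global offload/chimney/RSS state
      netsh int tcp set global chimney=disabled      # disable TCP Chimney offload for a test
      netsh int tcp set global chimney=automatic     # revert to the 2008 R2 default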

    Read the article

  • What should you bring to the table as a Software Architect?

    - by Ahmad Mageed
    There have been many questions with good answers about the role of a Software Architect (SA) on StackOverflow and Programmers SE. I am trying to ask a slightly more focused question than those. The very definition of a SA is broad, so for the sake of this question let's define a SA as follows: A Software Architect guides the overall design of a project, gets involved with coding efforts, conducts code reviews, and selects the technologies to be used. In other words, I am not talking about managerial rest and vest at the crest (further rhyming words elided) types of SAs. If I were to pursue any type of SA position I don't want to be away from coding. I might sacrifice some time to interface with clients and Business Analysts etc., but I am still technically involved and I'm not just aware of what's going on through meetings.

    With these points in mind, what should a SA bring to the table? Should they come in with a mentality of "laying down the law" (so to speak) and enforcing the usage of certain tools to fit "their way," i.e., coding guidelines, source control, patterns, UML documentation, etc.? Or should they specify initial direction and strategy, then be laid back and jump in as needed to correct the ship's direction? Depending on the organization this might not work. An SA who relies on TFS to enforce everything may struggle to implement their plan at an employer that only uses StarTeam. Similarly, an SA needs to be flexible depending on the stage of the project: if it's a fresh project they have more choices, whereas they might have fewer for existing projects.

    Here are some SA stories I have experienced, as a way of sharing some background in the hope that answers to my questions might also shed some light on these issues:

    1. I've worked with an SA who code reviewed literally every single line of code of the team. The SA would do this not just for our project but for other projects in the organization (imagine the time spent on this). At first it was useful for enforcing certain standards, but later it became crippling. FxCop was how the SA would find issues. Don't get me wrong, it was a good way to teach junior developers and force them to think of the consequences of their chosen approach, but for senior developers it was seen as somewhat draconian.

    2. One particular SA was against the use of a certain library, claiming it was slow. This forced us to write tons of code to achieve things differently, while the other library would've saved us a lot of time. Fast forward to the last month of the project and the clients were complaining about performance. The only solution was to change certain functionality to use the originally ignored approach, despite early warnings from the devs. By that point a lot of code was thrown out and not reusable, leading to overtime and stress. Sadly the estimates used for the project were based on the old approach, which my project was forbidden from using, so it wasn't an appropriate indicator for estimation. I would hear the PM say "we've done this before," when in reality they had not, since we were using a new library and the devs working on it were not the same devs used on the old project.

    3. The SA who would enforce the usage of DTOs, DOs, BOs, service layers and so on for all projects. New devs had to learn this architecture and the SA adamantly enforced the usage guidelines; exceptions were made only when it was absolutely difficult to follow them. The SA was grounded in their approach. Classes for DTOs and all CRUD operations were generated via CodeSmith, and database schemas were another similar ball of wax. However, having used this setup everywhere, the SA was not open to new technologies such as LINQ to SQL or Entity Framework.

    I am not using this post as a platform for venting. There were positive and negative aspects to my experiences with the SA stories mentioned above. My questions boil down to:
    - What should an SA bring to the table?
    - How can they strike a balance in their decision making?
    - Should one approach an SA job (as defined earlier) with the mentality that they must enforce certain ground rules?
    - Anything else to consider?
    Thanks! I'm sure these job tasks easily extend to senior devs or technical leads, so feel free to answer in that capacity as well.

    Read the article

  • Perl script and out of memory errors

    - by Kevin
    We have a midsized server with 48 GB of RAM and are attempting to import a list of around 100,000 opt-in email subscribers into a new list management system written in Perl. From my understanding, Perl doesn't impose memory limits the way PHP does, and yet we continuously get internal server errors when attempting the import. Investigating the error logs, we see that the script ran out of memory. Since Perl doesn't have a setting to limit memory usage (as far as I can tell), why are we getting these errors? I doubt a small import like this is consuming 48 GB of RAM. We have compromised and split the list into chunks of 10,000, but would like to figure out the root cause for a future fix. This is a CentOS machine with LiteSpeed as the web server.
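    Two generic observations, offered as guesses rather than a diagnosis: Perl itself has no memory cap, but the process can still be killed by a per-process limit imposed from outside (a ulimit, or a memory limit configured for external applications in the web server), and an importer that slurps the whole file or builds one big in-memory structure of every subscriber will use many times the file's size. A minimal sketch of the streaming style that avoids the latter; the file name and field layout are placeholders:

      use strict;
      use warnings;

      open my $fh, '<', 'subscribers.csv' or die "open: $!";
      while (my $line = <$fh>) {
          chomp $line;
          my ($email, $name) = split /,/, $line, 2;
          # insert or queue one record here instead of pushing everything into an array first
      }
      close $fh;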

    Read the article
