Search Results

Search found 24073 results on 963 pages for 'mount point'.


  • Best way to bring a system down with a "maintenance" message?

    - by iftrue
    What's the best way to bring down an apache2/tomcat6 setup for maintenance? Specifically, apache2 can stay running, but tomcat needs to restart to accomplish a number of tasks. My initial thought is to change the root directory in the httpd.conf VirtualHost entry to point to a new location, then issue a force-reload command to direct traffic away from the actual tomcat application. After some period of time, I perform tomcat maintenance, switch the VirtualHost entry back, and force-reload to begin directing traffic to the application again. I'm about to start work on a rather extensive web application, and my deployment procedure right now involves shutting everything down and bringing everything back up. Is there a better way to do this than what I've proposed?
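
    One common alternative (a sketch only, not the poster's setup; the site names are illustrative and a Debian-style sites-available/sites-enabled layout is assumed) is to keep two Apache site configurations prepared and flip between them, so entering and leaving maintenance is a single command and Tomcat can be restarted while a static maintenance page is served:

        #!/bin/sh
        # maintenance.sh -- hypothetical toggle script; adjust site names to your layout
        case "$1" in
          on)
            a2dissite mysite          # the config that fronts the Tomcat application
            a2ensite  maintenance     # a config that serves only a static maintenance page
            ;;
          off)
            a2dissite maintenance
            a2ensite  mysite
            ;;
          *)
            echo "usage: $0 on|off" >&2
            exit 1
            ;;
        esac
        apache2ctl graceful           # reload the config without dropping in-flight connections

    Serving the maintenance page with an HTTP 503 status rather than a 200 is also worth considering, so search engines don't index the placeholder.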

    Read the article

  • Unavailable packages repository

    - by bitmask
    I'm running Ubuntu 11.10 (Oneiric) on this machine, and suddenly apt is unable to update properly. If I ask it to update its package information by running apt-get update (or alternatively telling the Update Manager to "check"), it succeeds for about 120 packages (more precisely, I get about 120 Ign/Hit notes) and then says it cannot find the universe Sources and restricted amd64 indexes:

        Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Translation-en
        Hit http://de.archive.ubuntu.com oneiric-backports/restricted Translation-en
        Hit http://de.archive.ubuntu.com oneiric-backports/universe Translation-en
        Err http://de.archive.ubuntu.com oneiric/universe Sources
          404 Not Found [IP: 141.30.13.20 80]
        Err http://de.archive.ubuntu.com oneiric/restricted amd64 Packages
          404 Not Found [IP: 141.30.13.20 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/universe/source/Sources  404 Not Found [IP: 141.30.13.20 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/restricted/binary-amd64/Packages  404 Not Found [IP: 141.30.13.20 80]
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    I manually checked the de server and cannot find anything wrong with the paths it's complaining about; it also looks pretty much like, say, the us mirror. But oddly enough, the IP it lists seems to point to a Debian package server, which obviously does not contain Ubuntu packages. So, is this a local problem that I can fix somehow (and if so, how?), or is there actually some server down right now?
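
    A quick way to see exactly which source lines apt is acting on, and to test whether the problem is specific to the regional mirror, is sketched below (standard apt tooling; switching to the main archive is only a diagnostic step, not a recommendation to abandon the de mirror):

        # show every source line that references oneiric
        grep -rn "oneiric" /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null

        # temporarily try the main archive instead of the regional mirror
        sudo sed -i.bak 's|http://de.archive.ubuntu.com|http://archive.ubuntu.com|g' /etc/apt/sources.list
        sudo apt-get update

    If the main archive works, the regional mirror (or whatever the listed IP actually resolves to) is the culprit; restoring /etc/apt/sources.list.bak undoes the change.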

    Read the article

  • PASS Business Intelligence Virtual Chapter Upcoming Sessions (November 2013)

    - by Sergio Govoni
    Let me point out the upcoming live events, dedicated to Business Intelligence with SQL Server, that the PASS Business Intelligence Virtual Chapter has scheduled for November 2013.

    The "Accidental Business Intelligence Project Manager"
    Date: Thursday 7th November - 8:00 PM GMT / 3:00 PM EST / Noon PST
    Speaker: Jen Stirrup
    URL: https://attendee.gotowebinar.com/register/5018337449405969666
    You've watched The Apprentice with Donald Trump and Lord Alan Sugar. You know that the Project Manager is usually the one who gets fired. You've heard that Business Intelligence projects are prone to failure. You know that a quick Bing search for "why do Business Intelligence projects fail?" produces a search result of 25 million hits! Despite all this… you're now a Business Intelligence Project Manager - now what do you do? In this session, Jen will provide a "sparks from the anvil" series of steps and working practices in Business Intelligence Project Management. What about waterfall vs agile? What is a Gantt chart anyway? Is Microsoft Project your friend or a problematic aspect of being a BI PM? Jen will give you some ideas and insights that will help you set your BI project right: assess priorities, avoid conflict, empower the BI team and generally deliver the Business Intelligence project successfully!

    Dimensional Modelling Design Patterns: Beyond Basics
    Date: Tuesday 12th November - Noon AEDT / 1:00 AM GMT / Monday 11th November 5:00 PM PST
    Speaker: Jason Horner, Josh Fennessy and friends
    URL: https://attendee.gotowebinar.com/register/852881628115426561
    This session will provide a deeper dive into the art of dimensional modeling. We will look at the different types of fact tables and dimension tables, and how and when to use them. We will also cover some approaches to creating rich hierarchies that make reporting a snap. This session promises to be very interactive and engaging, so bring your toughest dimensional modeling quandaries.

    Data Vault Data Warehouse Architecture
    Date: Tuesday 19th November - 4:00 PM PST / 7:00 PM EST / Wednesday 20th November 11:00 PM AEDT
    Speaker: Jeff Renz and Leslie Weed
    URL: https://attendee.gotowebinar.com/register/1571569707028142849
    Data vault is a compelling architecture for an enterprise data warehouse using SQL Server 2012. A well-designed data vault data warehouse facilitates fast, efficient and maintainable data integration across business systems. In this session Leslie and I will review the basics of enterprise data warehouse design, introduce you to the data vault architecture, and discuss how you can leverage new features of SQL Server 2012 to help make your data warehouse solution provide maximum value to your users.

    Read the article

  • Positioning a sprite in XNA: Use ClientBounds or BackBuffer?

    - by Martin Andersson
    I'm reading a book called "Learning XNA 4.0" written by Aaron Reed. Throughout most of the chapters, whenever he calculates the position of a sprite to use in his call to SpriteBatch.Draw, he uses Window.ClientBounds.Width and Window.ClientBounds.Height. But then all of a sudden, on page 108, he uses PresentationParameters.BackBufferWidth and PresentationParameters.BackBufferHeight instead. I think I understand what the back buffer and the client bounds are and the difference between the two (or perhaps not?), but I'm mighty confused about when I should use one or the other when it comes to positioning sprites. The author mostly uses the client bounds, both for checking whether a moving sprite is off the screen and for finding a spawn point for new sprites. However, he seems to make two exceptions from this pattern in his book. The first is when he wants some animated sprites to "move in" and cross the screen from one side to another (page 108, as mentioned). The second and last is when he positions a texture to work as a button in the lower right corner of a Windows Phone 7 screen (page 379). Anyone got an idea? I shall provide some context if it is of any help. Here's how he usually calls SpriteBatch.Draw (code example from where he positions a sprite in the middle of the screen, page 35):

        spriteBatch.Draw(texture,
            new Vector2(
                (Window.ClientBounds.Width / 2) - (texture.Width / 2),
                (Window.ClientBounds.Height / 2) - (texture.Height / 2)),
            null, Color.White, 0, Vector2.Zero, 1, SpriteEffects.None, 0);

    And here is the first case of four possible in a switch statement that will set the position of soon-to-be-spawned moving sprites; this position will later be used in the SpriteBatch.Draw call (page 108):

        // Randomly choose which side of the screen to place enemy,
        // then randomly create a position along that side of the screen
        // and randomly choose a speed for the enemy
        switch (((Game1)Game).rnd.Next(4))
        {
            case 0: // LEFT to RIGHT
                position = new Vector2(
                    -frameSize.X,
                    ((Game1)Game).rnd.Next(0,
                        Game.GraphicsDevice.PresentationParameters.BackBufferHeight
                        - frameSize.Y));
                speed = new Vector2(
                    ((Game1)Game).rnd.Next(enemyMinSpeed, enemyMaxSpeed), 0);
                break;

    Read the article

  • Forum engine with full LDAP integration [closed]

    - by Andrian Nord
    We are looking for a forum engine that can actually maintain its user data in LDAP, perhaps via mods. The core point is the ability to maintain the data, i.e. all user profile settings, such as nickname, password, email, avatar, birthday and others (preferably configurable). One example of the level of LDAP integration I'm expecting is Drupal's LDAP integration, which allows any user attribute to be mapped into LDAP and kept in sync with the database. A year ago I did a small survey of existing free & FOSS engines and found a few forum engines with LDAP integration, namely SFM, phpBB and something else. The best-maintained solution was provided by phpBB3, which supports LDAP integration out of the box, but it is unable to sync data with changes made on the LDAP server by other software. Actually it wasn't even propagating changes back, let alone offering the ability to map additional attributes (other than name/password/email). Also, I haven't found any forum whose architecture has a proper abstraction over user settings, so I doubt that these engines (including phpBB) could be modded to add such functionality without dramatic changes to the core codebase. More recent research showed that even some commercial software, like IPB, is unable to keep its database synced with an LDAP directory and map additional attributes. In other words, all the support I've seen so far is simple user creation upon a user's first login, which is not good for us, as the forum is not the primary site and should not maintain its own user base (to reduce the risk of possible collisions). LDAP import is required because many other services (FTP, email, Jabber, the Drupal site) use the same user base. Currently we have a forum embedded into the Drupal site, but we are unsatisfied with its features. BTW, we are using Linux, and this is not a duplicate of this question, as its author seems to be satisfied with the behaviour described above. So, my question is: are there any (preferably FOSS and free) forum engines that can import, export, keep in sync, or otherwise integrate with an LDAP user database (preferably with the ability to map additional fields to LDAP attributes)?

    Read the article

  • CompTIA A+ Exam

    - by SysPrep2010
    Hello everyone, I have been in the IT field for only two years. I have been dealing with servers, firewalls, routers, switches, backup servers, and desktops. For the desktops, I have been dealing with WDS (Windows Deployment Services) - not a lot of hardware. My question is this: is it really important to have an A+ cert under your belt? I don't see the point anymore. When a desktop goes down, from what I have been seeing, they just buy a new one. I mean, I can rebuild systems and they are fun, but I haven't done it in a while.

    Read the article

  • post-receive hook permission denied "unable to create file" error

    - by ThomasReggi
    Just got gitolite installed on my webserver and am trying to get a post-receive hook that can point the git work tree in Apache's direction. This is what my post-receive hook looks like (the script comes from "Using Git to manage a web site"):

        #!/bin/sh
        echo "post-receive example.com triggered"
        GIT_WORK_TREE=/srv/sites/example.com/public git checkout -f

    This is the error response I'm getting back from git push origin master on my local workstation. These are files from within my repository.

        remote: post-receive example.com triggered
        remote: error: unable to create file .htaccess (Permission denied)
        remote: error: unable to create file .tm_sync.config (Permission denied)
        remote: fatal: cannot create directory at 'application': Permission denied

    Permissions of public:

        drwxr-xr-x 5 root root 4096 Jun 26 17:23 public
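
    The listing shows that public is owned by root, while the hook runs as the gitolite/git user, so the checkout has nowhere it is allowed to write. A minimal sketch of one way to fix it (the git user and group names are assumptions; use whatever account gitolite actually runs under):

        # hand the deployment directory to the user that executes the hook
        sudo chown -R git:git /srv/sites/example.com/public

        # alternatively, leave ownership alone and grant that user access via an ACL
        sudo setfacl -R -m u:git:rwX /srv/sites/example.com/public
        sudo setfacl -R -d -m u:git:rwX /srv/sites/example.com/public   # default ACL for newly created files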

    Read the article

  • Changing Servers - Redirect to new IP = No Downtime?

    - by Denis Pshenov
    I am changing servers for my website. The IP of the old server cannot be moved to the new one. To have no downtime I am planning to do the following; please can someone confirm it will work:

    1. Set up the new server and listen on the new IP
    2. Have the old server redirect all traffic to the new IP
    3. Change DNS records to point to the new IP

    My logic tells me that when I redirect to the new IP from my old box, the user will not see the domain name in the browser but will see the new IP. Is there a way to redirect to the new IP and send along the HOSTNAME with it so that the user will see the domain name in the browser? I'm doing this because the site is in constant use, and simply changing DNS settings won't do, as the database won't be synced between the new and old servers during propagation.
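
    An HTTP redirect from the old box will indeed expose the new IP in the address bar. What usually avoids that is having the old server reverse-proxy to the new IP instead of redirecting, so the Host header and the URL the visitor sees never change while DNS propagates. A sketch using Apache's mod_proxy (the IP and hostname are placeholders, and mod_proxy/mod_proxy_http must be enabled):

        # On the OLD server: forward everything to the new box, keeping the original Host header
        <VirtualHost *:80>
            ServerName www.example.com
            ProxyPreserveHost On
            ProxyPass        / http://203.0.113.10/
            ProxyPassReverse / http://203.0.113.10/
        </VirtualHost>

    Lowering the DNS TTL well before the switch shortens the window during which the old server has to carry the proxied traffic.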

    Read the article

  • Oracle RAC interconnect in a Dell M1000e Blade Enclosure

    - by Antitribu
    We are looking at a Dell M1000e enclosure and appropriate blades with 4 NICs each. We are planning on running Linux/Oracle 11g RAC on two blades; storage will be handled on an iSCSI SAN, to which two NICs (via passthrough) will be connected, leaving us with two NICs (via the blade centre switches). We would like to have an interconnect (obviously), an external IP and an internal IP. Would best practice be to:

    1. bond the remaining two interfaces and VLAN as appropriate to provide three virtual interfaces?
    2. run the interconnect on one interface and VLAN the external/internal interfaces?
    3. purchase a blade with more NICs because the above is a terrible idea?
    4. another option?

    Please feel free to point out the blindingly obvious or to relevant documentation on support.oracle. I am specifically interested in supported configurations and best practices. Thanks!
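
    For reference, the first option on Linux usually ends up looking something like the sketch below (RHEL/Oracle Linux style ifcfg files; interface names, bond mode, VLAN IDs and addresses are all illustrative, and Oracle's supported RAC configurations should take precedence):

        # /etc/sysconfig/network-scripts/ifcfg-bond0
        DEVICE=bond0
        BONDING_OPTS="mode=active-backup miimon=100"
        BOOTPROTO=none
        ONBOOT=yes

        # ifcfg-bond0.10 -- VLAN 10 carries the public/external network
        DEVICE=bond0.10
        VLAN=yes
        IPADDR=192.0.2.10
        NETMASK=255.255.255.0
        ONBOOT=yes

        # ifcfg-bond0.20 -- VLAN 20 carries the private RAC interconnect
        DEVICE=bond0.20
        VLAN=yes
        IPADDR=10.10.20.10
        NETMASK=255.255.255.0
        ONBOOT=yes

    Whether a VLAN on a shared bond is acceptable for the interconnect, as opposed to a dedicated physical interface, is exactly the supportability question worth checking on support.oracle.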

    Read the article

  • How do I access a "Deny" message from a Lidgren client?

    - by TJ Mott
    I'm using the Lidgren v3 networking library for a UDP client/server networking model. On the server end, I'm initializing a NetServer object with the NetIncomingMessage.ConnectionApproval message type enabled. So the client is able to successfully connect, and the first packet it sends is a login packet containing a username and password supplied by the user. The server receives that and does some black magic to authenticate, and everything works up to that point. If the login fails, the server calls NetIncomingMessage.SenderConnection.Deny("Invalid Login Credentials"). I want to know how to properly receive this deny message on the client. I'm getting the message; it shows up with a message type of NetIncomingMessage.StatusChanged. If I call ReadString on that message, I get a corrupted version of the string I passed to the Deny method on the server. The type of corruption varies (I've seen odd characters in there), but in every case it's truncated and is way shorter than the string I entered. Any ideas? The official documentation is sparse on this topic. I could use pointers from anyone who has successfully used the Lidgren library and uses the Accept or Deny methods. Also, if I don't do any authentication and just Approve() the connection every time, stuff actually works just fine and I'm getting reliable two-way UDP traffic. (And lastly, Stack Exchange said I don't have enough reputation to use the "Lidgren" tag....???)
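
    For what it's worth, in Lidgren a StatusChanged message carries the new NetConnectionStatus as a byte before the reason string, so calling ReadString straight away reads from the wrong offset, which would explain the truncated, garbled text. A hedged sketch of the usual read pattern (client is assumed to be an initialized NetClient; adapt to your own message loop):

        // using Lidgren.Network;
        NetIncomingMessage msg;
        while ((msg = client.ReadMessage()) != null)
        {
            switch (msg.MessageType)
            {
                case NetIncomingMessageType.StatusChanged:
                    // status byte first, then the reason string (e.g. the text passed to Deny())
                    var status = (NetConnectionStatus)msg.ReadByte();
                    string reason = msg.ReadString();
                    if (status == NetConnectionStatus.Disconnected)
                        Console.WriteLine("Disconnected: " + reason);
                    break;
            }
            client.Recycle(msg);
        }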

    Read the article

  • How to set up certificate authentication for MS SQL Server 2008 R2?

    - by Stephane
    Hello, I have to connect an (ADO) application running on a standalone Windows 2003 R2 server to a SQL 2008 R2 database that is a member of the domain. I have set up an SQL authentication account for this and hard-coded the password into the connection string, but I wonder if it wouldn't be possible to use certificate-based authentication for this instead. I haven't been able to find any documentation regarding this apparently new functionality of SQL 2008 R2 anywhere. Could someone kindly point me at some good documentation? Or at least a description of the functionality and whether it could be used in my case or not? Thank you in advance.

    Read the article

  • Belkin Wireless Router unable to connect to the internet although wireless connection is working

    - by ptamzz
    I have a Belkin Basic N150 wireless router. I'm trying to set up a wireless connection using the wired ports my university has provided in my hostel room. Usually, when I connect my laptop using a LAN wire through the port, my settings are:

    IP: 10.5.130.X
    Subnet Mask: 255.255.254.0
    Default Gateway: 10.5.130.250
    DNS Server: 10.200.1.11

    and I'm able to connect to the internet. Now, instead of connecting my laptop directly, I've connected the LAN wire to the Belkin wireless router, set the router to "Use as an Access Point", and in the IP field I've put 10.5.130.1. I've also set the IP of my system manually to 10.5.130.3. I'm able to connect to the Wi-Fi but I'm still not able to connect to the internet. What am I missing?
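
    One thing that stands out: setting only the IP by hand leaves the default gateway and DNS server unset, and a router in access-point mode will not hand those out via DHCP. A sketch of the full static configuration on the Windows client, assuming the same gateway and DNS as the wired setup (the interface name is illustrative):

        netsh interface ip set address "Wireless Network Connection" static 10.5.130.3 255.255.254.0 10.5.130.250
        netsh interface ip set dns "Wireless Network Connection" static 10.200.1.11

    The equivalent fields can of course be filled in through the adapter's TCP/IPv4 properties dialog instead.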

    Read the article

  • Is it possible to render and style a <title> element from within the <head> of an html document?

    - by Brian Z
    Is it possible to render and style a <title> element from within the <head> of an html document? I thought it was impossible to render information from the <head>, but the system status page for 37signals.com seems to be doing just that - http://status.37signals.com/. If you inspect the element at the very top of the page, the text that reads "37signals System Status", you'll see that the part of the DOM that is generating the text is the <head>'s <title>, and the css is as follows:

        title {
            display: block;
            margin: 10px auto;
            max-width: 840px;
            width: 100%;
            padding: 0 20px;
            float: left;
            color: black;
            text-rendering: optimizelegibility;
            -moz-box-sizing: border-box;
            box-sizing: border-box;
        }

    Can someone confirm that the <title> info from the <head> is indeed what is being rendered? If so, can someone point to documentation that defines this capability, as I have not found any? I have applied the above css to an html document on my local web server using the same browser (chromium, os x 10.8.5) as the 37signals site was viewed on, yet my file did not display the <head>'s <title>.

    Read the article

  • New partnership allows auto-transposition of client/server application to Windows Azure

    - by Webgui
    The economics of IT is changing rapidly, and organizations are searching for ways to widen and secure the availability of their systems while at the same time lowering costs, which is exactly what the cloud is meant to do. Running your systems on Microsoft's Windows Azure cloud, for example, would improve and secure the availability, accessibility and scalability (both up and down) of your systems and support the new IT economics. However, in order to take advantage of the cloud's promise of lower cost of ownership, applications must be built or adjusted to work on that platform, and in most cases this is not a simple task. Even existing web applications cannot always be transferred to Azure without some changes, and for client/server applications the task is far more challenging, even to the point where it seems impossible. The reason is the gap between client/server desktop technology and the cloud's. For that reason, most of the known methodologies for migrating existing client/server applications actually involve a rewrite of the desktop systems for the cloud. A unique approach is introduced by Visual WebGui, which creates a virtualization layer atop the ASP.NET web server, moves the transformed or generated .NET code to that layer, and then, using a patent-pending protocol, renders a user interface within a plain browser. The end result is pure .NET code that serves as the base code for a rich web application, and now, due to a collaboration with Microsoft Windows Azure, Visual WebGui provides the shortest path from client/server to the Azure cloud by being able to handle close to 95% of the transformation to the cloud platform automatically. Application Migration to Azure without migraines. More information about the Instant CloudMove Azure solution here.

    Read the article

  • Network printer - Print direct or via shared printer on Server?

    - by NickC
    It has occurred to me that a workstation can connect to a printer in two ways:

    1) Printing directly to the IP of the printer, with the print driver installed locally.
    2) Printing to a \\Server\Printer1 share, with the print queue residing on the server.

    The question is: which way is preferred? I would assume that printing directly to a network printer, rather than going through the server, would be the most efficient from the point of view of network traffic. On the other hand, I guess a server printer share would be easier to manage, with the correct driver automatically being downloaded to the workstations. Also, what about using GPP (Server 2012) to install this printer on the workstations - does that require any specific approach?
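
    For the shared-queue route, per-machine connections can also be pushed without GPP; a commonly used sketch (the share name is the one from the question, and the commands need administrative rights):

        rem add a per-machine connection to the shared queue (applied at the next logon/restart)
        rundll32 printui.dll,PrintUIEntry /ga /n\\Server\Printer1
        rem list per-machine connections to verify
        rundll32 printui.dll,PrintUIEntry /ge

    With GPP the equivalent is a Shared Printer item under Preferences > Control Panel Settings > Printers; the clients then pull the correct driver from the server's queue.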

    Read the article

  • Illumination and Shading for computer graphics class

    - by Sam I Am
    I am preparing for my test tomorrow and this is one of the practice questions. I solved it partially but I am confused about the rest. Here is the problem:

    Consider a gray world with no ambient and specular lighting (only diffuse lighting). The screen coordinates of a triangle P1, P2, P3 are P1 = (100,100), P2 = (300,150), P3 = (200,200). The gray values at P1, P2, P3 are 1/2, 3/4, and 1/4 respectively. The light is at infinity and its direction and gray color are (1,1,1) and 1.0 respectively. The coefficient of diffuse reflection is 1/2. The normals of P1, P2, P3 are N1 = (0,0,1), N2 = (1,0,0), and N3 = (0,1,0) respectively. Consider the z-coordinates of the three points P1, P2, P3 to be 0. Do not normalize the normals.

    I have computed that the illumination at the three vertices P1, P2, P3 is (1/4, 3/8, 1/8). I have also computed that the interpolation coefficients of a point P inside the triangle, whose coordinates are (220, 160), are given by (1/5, 2/5, 2/5). Now I have four more questions regarding this problem.

    1) The illumination at P using Gouraud shading is: i) 1/2. The answer is 1/2, but I have no idea how to compute it.
    2) The interpolated normal at P is given by: i) (2/5, 2/5, 1/5) ii) (1/2, 1/4, 1/4) iii) (3/5, 1/5, 1/5)
    3) The interpolated color at P is given by: i) 1/2. Again, I know the correct answer but have no idea how to solve it.
    4) The illumination at P using Phong shading is: i) 1/4 ii) 9/40 iii) 1/2
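
    For reference (and without asserting which multiple-choice options are correct), the two shading models interpolate different quantities across the triangle. Writing the barycentric coefficients at P as (α1, α2, α3) = (1/5, 2/5, 2/5), a sketch of the formulas in the notation of the problem, assuming the vertex intensities I1, I2, I3 were obtained as k_d I_L c_i (N_i · L):

        % Gouraud shading: interpolate the vertex intensities directly
        I_P^{\mathrm{Gouraud}} = \alpha_1 I_1 + \alpha_2 I_2 + \alpha_3 I_3

        % Phong shading: interpolate the normal (and the gray value), then relight at P
        \vec{N}_P = \alpha_1 \vec{N}_1 + \alpha_2 \vec{N}_2 + \alpha_3 \vec{N}_3
        c_P = \alpha_1 c_1 + \alpha_2 c_2 + \alpha_3 c_3
        I_P^{\mathrm{Phong}} = k_d \, I_L \, c_P \, (\vec{N}_P \cdot \vec{L})

    Here the c_i are the vertex gray values, k_d the diffuse coefficient, I_L the light's gray value, and L its (unnormalized) direction.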

    Read the article

  • What could be causing Windows to randomly reset the system time to a random time?

    - by Jonathan Dumaine
    My Windows 7 machine infuriates me. It cannot hold a date. At one point it all worked fine, but now it will decide that it needs to change the system time to a random time and date, either in the future or in the past. There seems to be no correlation or set interval to when it happens. In an attempt to remedy this, I have:

    Correctly set the time in the BIOS.
    Replaced the motherboard battery with a new CR2032 (even checked it with a multimeter).
    Tried disabling automatic internet synchronization via the "Date and Time" dialog.
    Stopped, restarted, and left disabled the Windows Time service.

    Yet with all of these actions, the time continues to change. Any ideas?
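
    If it helps narrow things down, the time service's view of the world can be dumped from an elevated command prompt; these are standard w32tm switches, nothing specific to this machine:

        rem show the current source, stratum and last successful sync
        w32tm /query /status
        rem dump the full time-service configuration
        w32tm /query /configuration
        rem force an immediate resync against the configured source
        w32tm /resync

    If the clock still jumps while the Windows Time service is stopped, that points away from NTP and toward hardware, firmware or third-party software touching the clock.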

    Read the article

  • Cannot install SQL Server (2012) PowerPivot for SharePoint, always fails Sharepoint Version check

    - by ProfessionalAmateur
    Trying to do a fresh install of SharePoint 2010 (w/ SP1) and SQL Server 2012 PowerPivot for SharePoint. The prerequisites clearly state that SharePoint 2010 SP1 is needed, which we have installed. However, when trying to install the SQL Server portion, we consistently fail the 'SharePoint version requirement for PowerPivot for SharePoint' validation rule in the SQL Server install process. Here is the process we are following:

    1. Install SharePoint 2010
    2. Install SharePoint 2010 SP1
    3. Install SQL Server 2012 PowerPivot for SharePoint

    Here is a screen shot of the error and the log file error. We are completely stuck at this point; has anyone run into this before?
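
    One quick sanity check, sketched with standard SharePoint cmdlets from the SharePoint 2010 Management Shell, is to confirm the farm actually reports an SP1-level build; the commonly cited SP1 build number is 14.0.6029.1000, but treat that exact figure as an assumption to verify against Microsoft's build list:

        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        (Get-SPFarm).BuildVersion    # should report 14.0.6029.1000 or higher once SP1 is applied

    If the farm still reports a pre-SP1 build even though the SP1 binaries are installed, re-running the SharePoint Products Configuration Wizard (psconfig) after the service pack is the usual missing step.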

    Read the article

  • Plesk on Windows Server 2008 memory usage

    - by Thomas
    I have a server running IIS and Plesk. There is now a problem every day in which the memory usage of PleskControlPanel.exe and w3wp.exe slowly increases, and at some point all of the sites hosted on the server go down. Restarting Plesk through its panel will reset its memory usage for a while, and restarting IIS will make it last even longer, but the constant memory climb is still there. I cannot see anything by looking through the Event Viewer. Has anyone ever seen anything like this with Plesk on Windows? Thanks for any help.
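
    As a stopgap while hunting the leak, IIS can recycle the offending application pool automatically once its private memory passes a threshold; a sketch with appcmd (the pool name and the roughly 500 MB limit are illustrative, and the value is given in kilobytes):

        %windir%\system32\inetsrv\appcmd list apppool
        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /recycling.periodicRestart.privateMemory:512000

    This does not fix whatever is leaking inside w3wp.exe or PleskControlPanel.exe, but it keeps the sites from going down while the cause is investigated.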

    Read the article

  • When does the "Do One Thing" paradigm become harmful?

    - by Petr
    For the sake of argument, here's a sample function that prints the contents of a given file line by line.

    Version 1:

        void printFile(const string & filePath)
        {
            fstream file(filePath, ios::in);
            string line;
            while (file.good())
            {
                getline(file, line);
                cout << line << endl;
            }
        }

    I know it is recommended that functions do one thing at one level of abstraction. To me, though, the code above does pretty much one thing and is fairly atomic. Some books (such as Robert C. Martin's Clean Code) seem to suggest breaking the above code into separate functions.

    Version 2:

        void printLine(const string & line)
        {
            cout << line << endl;
        }

        void printLines(fstream & file)
        {
            string line;
            while (file.good())
            {
                getline(file, line);
                printLine(line);
            }
        }

        void printFile(const string & filePath)
        {
            fstream file(filePath, ios::in);
            printLines(file);
        }

    I understand what they want to achieve (open file / read lines / print line), but isn't it a bit of overkill? The original version is simple and in some sense already does one thing - prints a file. The second version will lead to a large number of really small functions which may be far less legible than the first version. Wouldn't it be, in this case, better to have the code in one place? At which point does the "Do One Thing" paradigm become harmful?

    Read the article

  • Migrating Forms to Java or ADF, the truth and no FUD

    - by Grant Ronald
    The question about migrating Forms to Java (or ADF or APEX) comes up time and time again. I wanted to pull some core information together in a single blog post to address this question. The first question I always ask is "WHY" - Forms may still be a viable option for you, so "if it ain't broke don't fix it". Bottom line: whatever anyone tells you, it's going to be a considerable effort and cost to migrate from Forms to something else, so the business is going to want to know WHY you'd spend all those hard-earned dollars switching from something that might have been serving you quite adequately. Second point: if you are going to switch, I would encourage you NOT to look at building a Forms clone. So many times I see people trying to build an ADF application and EXACTLY mimic the Forms model - ADF is NOT a Forms clone. You should be building to the sweet spot of your target technology, not your 20-year-old client/server technology. This is also the chance for the business to embrace change, so maybe look at new processes, channels and technology options that weren't available when you first developed your Forms applications. To help you understand what is involved, I've put together a number of resources:

    - Thinking about migration of Forms to Java, ADF or APEX? Read this to prepare yourself.
    - Oracle Forms to ADF: When, Why and How - this gives you an overview of our vision, directly from Oracle Product Management.
    - Redeveloping a Forms Application with Oracle JDeveloper and Oracle ADF - a conference session from myself and Lynn Munsinger on how ADF can be used in a Forms migration/rewrite.

    As someone who manages both the Forms and ADF Product Management teams, I've a foot in either camp and am happy to see you use either tool. However, I want you to be able to make an informed decision. My hope is that these information sources will help you do that.

    Read the article

  • Is there a way to create a consistent snapshot/SnapMirror across multiple volumes?

    - by Tomer Gabel
    We use a NetApp FAS 6-series filer with an application that spans multiple volumes. For backup purposes I would like to create a consistent snapshot that spans these volumes at the same point in time (or at least with an extremely low delta); additionally, we'd like to use SnapMirror to replicate the production environment to test volumes. The problem is in creating a consistent snapshot/SnapMirror, since these commands are not transactional and do not take multiple parameters. I tried scripting consecutive "snap create" or "snapmirror resync" commands via SSH, but there's always a 0.5-2 second difference between each snapshot. It's currently "good enough", but I'm seriously concerned about the consistency impact under increased load (we're currently in pre-production). Has anyone managed to create a consistent snapshot that spans several volumes? How did you pull it off?

    Read the article

  • SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet

    - by pinaldave
    No! It is not a SQL Server issue or an SSMS issue; it is simply how things work, and there is a simple trick to resolve it. It is very common that when users copy a resultset to Excel, the floating point or decimal values are lost. The solution is very simple and requires a small adjustment in Excel. By default Excel is very smart: when it detects that the value being pasted is numeric, it changes the column format to accommodate that. Now, as zeros trailing the digits after the decimal point have no value, Excel automatically hides them. To prevent this from happening, the user has to convert the columns to text format so the formatting is preserved. Here is how you can do it. Select the corner between A and 1 and right-click on it. It will select the complete spreadsheet. If you want to change the format of an individual column, you can select it the same way. In the menu, click on Format Cells… It will bring up the following dialog. Here, by default, the selected columns will be General; change that to Text. It will change the format of all the cells to Text. Now once again paste the values from SSMS into Excel. This time it will preserve the decimal values from SSMS. Solved! Do you know any other trick to preserve the decimal values? Please leave a comment. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Excel

    Read the article

  • Canonicals with differing content

    - by Jimbo Jonny
    Interesting conundrum here with canonicals. Let's say I have a site with a "verified" system where other websites can become so-and-so "verified". The URL they send people to in order to confirm verification is something like "blah.com/verify/company1" or "blah.com/verify/company2". But logically "blah.com/verify" itself is not verifying anyone in particular, so it redirects to the signup form to get verified, at "blah.com/verify/register". As far as the actual companies registered, I figure it doesn't make sense to index every individual URL when the only tiny difference is which company it's saying yay or nay to being verified, so canonicals could come in handy on those pages to condense the indexing. Yet making "blah.com/verify" the canonical "hub" doesn't work well because it's a signup form, not a verification page, so technically it has quite different content from the various verification pages themselves. But at the same time it's a bit unfair to choose one company to point all the canonical benefit to and use that as the "hub", yet a bit wasteful to have Google index every individual verification page and spread out all that link juice. Basically, I'm just looking for advice: what's best for this from a search engine standpoint?
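
    For concreteness, the tag being debated would sit in the <head> of each verification page and look something like this (whichever URL ends up in href is exactly the open question):

        <!-- on blah.com/verify/company1, blah.com/verify/company2, etc. -->
        <link rel="canonical" href="http://blah.com/verify/company1" />

    Pointing every company's page at a single company's URL is the "unfair" option described above; a self-referencing canonical on each page simply tells search engines the pages are intentionally distinct.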

    Read the article

  • Why Are Minimized Programs Often Slow to Open Again?

    - by Jason Fitzpatrick
    It seems particularly counterintuitive: you minimize an application because you plan on returning to it later and wish to skip shutting the application down and restarting it later, but sometimes maximizing it takes even longer than launching it fresh. What gives?

    Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Bart wants to know why he's not saving any time with application minimization:

    I'm working in Photoshop CS6 and multiple browsers a lot. I'm not using them all at once, so sometimes some applications stay minimized to the taskbar for hours or days. The problem is, when I try to maximize them from the taskbar, it sometimes takes longer than starting them! Photoshop especially feels really weird for many seconds after finally showing up; it's slow, unresponsive and sometimes even freezes completely for a minute or two. It's not a hardware problem, as it's been like that on all my PCs for as long as I can remember. Would I also notice it after upgrading my HDD to an SSD and adding RAM (my main PC holds 4 GB currently)? Could people with powerful PCs / Macs tell me - does it also happen to you? I guess OSes somehow "focus" on active software and move all the resources away from the ones that are running but not being used. Is it possible to somehow set RAM / CPU / HDD priorities or something for, let's say, Photoshop, so it won't slow down after a long period of inactivity?

    So what is the deal? Why does he find himself waiting to maximize a minimized app?

    The Answer

    SuperUser contributor Allquixotic explains why:

    Summary

    The immediate problem is that the programs that you have minimized are being paged out to the "page file" on your hard disk. This symptom can be improved by installing a Solid State Disk (SSD), adding more RAM to your system, reducing the number of programs you have open, or upgrading to a newer system architecture (for instance, Ivy Bridge or Haswell). Out of these options, adding more RAM is generally the most effective solution.

    Explanation

    The default behavior of Windows is to give active applications priority over inactive applications for having a spot in RAM. When there's significant memory pressure (meaning the system doesn't have a lot of free RAM if it were to let every program have all the RAM it wants), it starts putting minimized programs into the page file, which means it writes out their contents from RAM to disk, and then makes that area of RAM free. That free RAM helps programs you're actively using — say, your web browser — run faster, because if they need to claim a new segment of RAM (like when you open a new tab), they can do so. This "free" RAM is also used as page cache, which means that when active programs attempt to read data on your hard disk, that data might be cached in RAM, which prevents your hard disk from being accessed to get that data. By using the majority of your RAM for page cache, and swapping out unused programs to disk, Windows is trying to improve responsiveness of the program(s) you are actively using, by making RAM available to them, and caching the files they access in RAM instead of the hard disk. The downside of this behavior is that minimized programs can take a while to have their contents copied from the page file, on disk, back into RAM. The time increases the larger the program's footprint in memory. This is why you experience that delay when maximizing Photoshop.
    RAM is many times faster than a hard disk (depending on the specific hardware, it can be up to several orders of magnitude). An SSD is considerably faster than a hard disk, but it is still slower than RAM by orders of magnitude. Having your page file on an SSD will help, but it will also wear out the SSD more quickly than usual if your page file is heavily utilized due to RAM pressure.

    Remedies

    Here is an explanation of the available remedies, and their general effectiveness:

    Installing more RAM: This is the recommended path. If your system does not support more RAM than you already have installed, you will need to upgrade more of your system: possibly your motherboard, CPU, chassis, power supply, etc. depending on how old it is. If it's a laptop, chances are you'll have to buy an entire new laptop that supports more installed RAM. When you install more RAM, you reduce memory pressure, which reduces use of the page file, which is a good thing all around. You also make available more RAM for page cache, which will make all programs that access the hard disk run faster. As of Q4 2013, my personal recommendation is that you have at least 8 GB of RAM for a desktop or laptop whose purpose is anything more complex than web browsing and email. That means photo editing, video editing/viewing, playing computer games, audio editing or recording, programming / development, etc. all should have at least 8 GB of RAM, if not more.

    Run fewer programs at a time: This will only work if the programs you are running do not use a lot of memory on their own. Unfortunately, Adobe Creative Suite products such as Photoshop CS6 are known for using an enormous amount of memory. This also limits your multitasking ability. It's a temporary, free remedy, but it can be an inconvenience to close down your web browser or Word every time you start Photoshop, for instance. This also wouldn't stop Photoshop from being swapped when minimizing it, so it really isn't a very effective solution. It only helps in some specific situations.

    Install an SSD: If your page file is on an SSD, the SSD's improved speed compared to a hard disk will result in generally improved performance when the page file has to be read from or written to. Be aware that SSDs are not designed to withstand a very frequent and constant random stream of writes; they can only be written over a limited number of times before they start to break down. Heavy use of a page file is not a particularly good workload for an SSD. You should install an SSD in combination with a large amount of RAM if you want maximum performance while preserving the longevity of the SSD.

    Use a newer system architecture: Depending on the age of your system, you may be using an out of date system architecture. The "system architecture" is generally defined as the "generation" (think generations like children, parents, grandparents, etc.) of the motherboard and CPU. Newer generations generally support faster I/O (input/output), better memory bandwidth, lower latency, and less contention over shared resources, instead providing dedicated links between components. For example, starting with the "Nehalem" generation (around 2009), the Front-Side Bus (FSB) was eliminated, which removed a common bottleneck, because almost all system components had to share the same FSB for transmitting data. This was replaced with a "point to point" architecture, meaning that each component gets its own dedicated "lane" to the CPU, which continues to be improved every few years with new generations.
    You will generally see a more significant improvement in overall system performance depending on the "gap" between your computer's architecture and the latest one available. For example, a Pentium 4 architecture from 2004 is going to see a much more significant improvement upgrading to "Haswell" (the latest as of Q4 2013) than a "Sandy Bridge" architecture from ~2010.

    Links

    Related questions: How to reduce disk thrashing (paging)? Windows Swap (Page File): Enable or Disable? Also, just in case you're considering it, you really shouldn't disable the page file, as this will only make matters worse; see here. And, in case you needed extra convincing to leave the Windows Page File alone, see here and here.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article
