Search Results

Search found 21853 results on 875 pages for 'point'.


  • New partnership allows auto-transposition of client/server application to Windows Azure

    - by Webgui
    The economics of IT is changing rapidly, and organizations are seeking to widen and secure the availability of their systems while lowering costs, which is exactly what the cloud is meant to do. Running your systems on Microsoft's Windows Azure cloud, for example, would improve and secure the availability, accessibility and scalability (both up and down) of your systems and support the new IT economics. However, in order to take advantage of the cloud's promise of lower cost of ownership, applications must be built or adjusted to work on that platform, and in most cases this is not a simple task. Even existing web applications cannot always be transferred to Azure without some changes, and for client/server applications the task is far more challenging, to the point where it can seem impossible. The reason is the gap between client/server desktop technology and the cloud's. For that reason, most of the known methodologies for migrating existing client/server applications actually involve a rewrite of the desktop systems for the cloud.

    A unique approach is introduced by Visual WebGui: it creates a virtualization layer atop the ASP.NET web server, moves the transformed or generated .NET code to that layer, and then, using a patent-pending protocol, renders the user interface within a plain browser. The end result is pure .NET code that serves as the base code for a pure rich web application. Now, thanks to a collaboration with Microsoft Windows Azure, Visual WebGui provides the shortest path from client/server to the Azure cloud, handling close to 95% of the transformation to the cloud platform automatically. Application migration to Azure without migraines: more information about the Instant CloudMove Azure solution here.

    Read the article

  • post-receive hook permission denied "unable to create file" error

    - by ThomasReggi
    Just got gitolite installed on my webserver and am trying to get a post-receive hook that checks the working tree out where Apache can serve it. This is what my post-receive hook looks like (I got this script from the "Using Git to manage a web site" tutorial):

        #!/bin/sh
        echo "post-receive example.com triggered"
        GIT_WORK_TREE=/srv/sites/example.com/public git checkout -f

    This is the error response I'm getting back from git push origin master on my local workstation. These are files from within my repository.

        remote: post-receive example.com triggered
        remote: error: unable to create file .htaccess (Permission denied)
        remote: error: unable to create file .tm_sync.config (Permission denied)
        remote: fatal: cannot create directory at 'application': Permission denied

    Permissions of public:

        drwxr-xr-x 5 root root 4096 Jun 26 17:23 public
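    The listing above shows public owned by root with no group or world write access, so the user the hook runs as cannot write there. A minimal sketch of one possible fix, assuming the gitolite account is named git (check your install for the actual user and group):

        # hand the deploy target to the gitolite user (assumed user/group: git)
        sudo chown -R git:git /srv/sites/example.com/public

        # or keep root ownership and grant the git group write access instead
        # sudo chgrp -R git /srv/sites/example.com/public
        # sudo chmod -R g+w /srv/sites/example.com/public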

    Read the article

  • Why Are Minimized Programs Often Slow to Open Again?

    - by Jason Fitzpatrick
    It seems particularly counterintuitive: you minimize an application because you plan on returning to it later and wish to skip shutting it down and restarting it, but sometimes maximizing it takes even longer than launching it fresh. What gives? Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Bart wants to know why he's not saving any time with application minimization:

    I'm working in Photoshop CS6 and multiple browsers a lot. I'm not using them all at once, so sometimes some applications are minimized to the taskbar for hours or days. The problem is, when I try to maximize them from the taskbar – it sometimes takes longer than starting them! Especially Photoshop feels really weird for many seconds after finally showing up; it's slow, unresponsive and sometimes even totally freezes for a minute or two. It's not a hardware problem, as it's been like that forever on all my PCs. Would I also notice it after upgrading my HDD to an SSD and adding RAM (my main PC holds 4 GB currently)? Could guys with powerful PCs/Macs tell me – does it also happen to you? I guess OSes somehow "focus" on active software and move all the resources away from the ones that run, but are not used. Is it possible to somehow set RAM/CPU/HDD priorities or something for, let's say, Photoshop, so it won't slow down after a long period of inactivity?

    So what is the deal? Why does he find himself waiting to maximize a minimized app?

    The Answer

    SuperUser contributor Allquixotic explains why:

    Summary

    The immediate problem is that the programs you have minimized are being paged out to the "page file" on your hard disk. This symptom can be improved by installing a Solid State Disk (SSD), adding more RAM to your system, reducing the number of programs you have open, or upgrading to a newer system architecture (for instance, Ivy Bridge or Haswell). Out of these options, adding more RAM is generally the most effective solution.

    Explanation

    The default behavior of Windows is to give active applications priority over inactive applications for having a spot in RAM. When there's significant memory pressure (meaning the system doesn't have a lot of free RAM if it were to let every program have all the RAM it wants), it starts putting minimized programs into the page file, which means it writes out their contents from RAM to disk and then makes that area of RAM free. That free RAM helps programs you're actively using, say, your web browser, run faster, because if they need to claim a new segment of RAM (like when you open a new tab), they can do so. This "free" RAM is also used as page cache, which means that when active programs attempt to read data on your hard disk, that data might be cached in RAM, which prevents your hard disk from being accessed to get that data. By using the majority of your RAM for page cache, and swapping out unused programs to disk, Windows is trying to improve the responsiveness of the program(s) you are actively using, by making RAM available to them, and caching the files they access in RAM instead of the hard disk. The downside of this behavior is that minimized programs can take a while to have their contents copied from the page file, on disk, back into RAM. The time increases the larger the program's footprint in memory. This is why you experience that delay when maximizing Photoshop.
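    A rough back-of-the-envelope figure (my own numbers, not from the answer) shows why the delay scales with the program's memory footprint: the restore time is roughly the paged-out size divided by the disk's read speed,

        t_{restore} \approx \frac{S_{paged}}{B_{disk}}, \qquad \text{e.g.}\quad \frac{2\,\text{GB}}{100\,\text{MB/s}} \approx 20\,\text{s (HDD)}, \qquad \frac{2\,\text{GB}}{500\,\text{MB/s}} \approx 4\,\text{s (SSD)}

    and in practice paging back in involves scattered random reads, so a hard disk often does considerably worse than this sequential estimate.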
    RAM is many times faster than a hard disk (depending on the specific hardware, it can be up to several orders of magnitude). An SSD is considerably faster than a hard disk, but it is still slower than RAM by orders of magnitude. Having your page file on an SSD will help, but it will also wear out the SSD more quickly than usual if your page file is heavily utilized due to RAM pressure.

    Remedies

    Here is an explanation of the available remedies, and their general effectiveness:

    Installing more RAM: This is the recommended path. If your system does not support more RAM than you already have installed, you will need to upgrade more of your system: possibly your motherboard, CPU, chassis, power supply, etc., depending on how old it is. If it's a laptop, chances are you'll have to buy an entire new laptop that supports more installed RAM. When you install more RAM, you reduce memory pressure, which reduces use of the page file, which is a good thing all around. You also make more RAM available for page cache, which will make all programs that access the hard disk run faster. As of Q4 2013, my personal recommendation is that you have at least 8 GB of RAM for a desktop or laptop whose purpose is anything more complex than web browsing and email. That means photo editing, video editing/viewing, playing computer games, audio editing or recording, programming/development, etc. should all have at least 8 GB of RAM, if not more.

    Run fewer programs at a time: This will only work if the programs you are running do not use a lot of memory on their own. Unfortunately, Adobe Creative Suite products such as Photoshop CS6 are known for using an enormous amount of memory. This also limits your multitasking ability. It's a temporary, free remedy, but it can be an inconvenience to close down your web browser or Word every time you start Photoshop, for instance. This also wouldn't stop Photoshop from being swapped out when you minimize it, so it really isn't a very effective solution. It only helps in some specific situations.

    Install an SSD: If your page file is on an SSD, the SSD's improved speed compared to a hard disk will result in generally improved performance when the page file has to be read from or written to. Be aware that SSDs are not designed to withstand a very frequent and constant random stream of writes; they can only be written over a limited number of times before they start to break down. Heavy use of a page file is not a particularly good workload for an SSD. You should install an SSD in combination with a large amount of RAM if you want maximum performance while preserving the longevity of the SSD.

    Use a newer system architecture: Depending on the age of your system, you may be using an out-of-date system architecture. The "system architecture" is generally defined as the "generation" (think generations like children, parents, grandparents, etc.) of the motherboard and CPU. Newer generations generally support faster I/O (input/output), better memory bandwidth, lower latency, and less contention over shared resources, instead providing dedicated links between components. For example, starting with the "Nehalem" generation (around 2009), the Front-Side Bus (FSB) was eliminated, which removed a common bottleneck, because almost all system components had to share the same FSB for transmitting data. This was replaced with a "point to point" architecture, meaning that each component gets its own dedicated "lane" to the CPU, which continues to be improved every few years with new generations. You will generally see a more significant improvement in overall system performance depending on the "gap" between your computer's architecture and the latest one available. For example, a Pentium 4 architecture from 2004 is going to see a much more significant improvement upgrading to "Haswell" (the latest as of Q4 2013) than a "Sandy Bridge" architecture from ~2010.

    Links

    Related questions: How to reduce disk thrashing (paging)? Windows Swap (Page File): Enable or Disable? Also, just in case you're considering it, you really shouldn't disable the page file, as this will only make matters worse; see here. And, in case you needed extra convincing to leave the Windows Page File alone, see here and here.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Why are my 32bit OpenGL libraries pointing to mesa instead of nvidia, and how do I fix it?

    - by Codemonkey
    I have installed Nvidia's drivers on my Ubuntu 13 system, but according to this command (ldconfig -p | grep GL):

        $ ldconfig -p | grep GL
        libQtOpenGL.so.4 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libQtOpenGL.so.4
        libGLU.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLU.so.1
        libGLEWmx.so.1.8 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLEWmx.so.1.8
        libGLEW.so.1.8 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGLEW.so.1.8
        libGLESv2.so.2 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa-egl/libGLESv2.so.2
        libGL.so.1 (libc6,x86-64) => /usr/lib/libGL.so.1
        libGL.so.1 (libc6) => /usr/lib/i386-linux-gnu/mesa/libGL.so.1
        libGL.so (libc6,x86-64) => /usr/lib/libGL.so
        libEGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa-egl/libEGL.so.1

    the 32-bit version of OpenGL is pointing to mesa's libraries instead of nvidia's. This causes my Steam games to refuse to launch with the error:

        Could not find required OpenGL entry point 'glGetError'! Either your video card is unsupported, or your OpenGL driver needs to be updated.

    Why is this the case? When the nvidia installer asked me if I wanted to install "32-bit compatibility libraries" (or something like that) I chose yes. How do I fix this?

    Edit: I just reinstalled the same Nvidia driver, and that apparently removed the 32-bit OpenGL driver completely:

        $ ldconfig -p | grep libGL.so
        libGL.so.1 (libc6,x86-64) => /usr/lib/libGL.so.1
        libGL.so (libc6,x86-64) => /usr/lib/libGL.so

    Now Steam won't start:

        You are missing the following 32-bit libraries, and Steam may not run: libGL.so.1

    Again, I chose YES when the installer asked me if I wanted to install 32-bit libraries. Why are they not installed!?
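    One thing worth verifying before re-running the installer (my own suggestion, not from the question): that the i386 architecture is actually enabled in dpkg, since the installer's 32-bit compatibility libraries are only useful if a 32-bit runtime is present at all:

        # should print "i386"; if it prints nothing, 32-bit packages are disabled
        dpkg --print-foreign-architectures

        # enable 32-bit packages and refresh the package lists if needed
        sudo dpkg --add-architecture i386
        sudo apt-get update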

    Read the article

  • The simplest Ubuntu mail server

    - by John G.
    After days of trying all sorts of tutorials I finally found a simple solution (not necessarily the best) for a functional Ubuntu mail server:

        sudo aptitude install postfix

    Next, type:

        sudo dpkg-reconfigure postfix

    and answer the prompts like this:

        Internet Site
        yourdomain.com
        john (type your ubuntu user)
        yourdomain.com, localhost.localdomain, localhost
        No
        127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.0.1/24 (replace 192.168.0.1 with your server IP address)
        0
        +
        all

    Next, install mail-stack-delivery:

        sudo aptitude install mail-stack-delivery

    At this point you have a working mail server. Next, I configured SquirrelMail and started sending and receiving mail. This configuration worked with both Apache and Nginx.
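    A quick way to smoke-test the finished setup (a generic SMTP session of my own, not part of the original recipe), assuming postfix is listening on the default port 25:

        # talk SMTP to the local postfix instance by hand
        telnet localhost 25
        # then type, one line at a time:
        #   HELO yourdomain.com
        #   MAIL FROM:<john@yourdomain.com>
        #   RCPT TO:<john@yourdomain.com>
        #   DATA
        #   Subject: test
        #
        #   test body
        #   .
        #   QUIT
        # "250" replies to MAIL/RCPT and "354" after DATA mean postfix is accepting mail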

    Read the article

  • Installing Sharepoint 2007 on Windows Server 2008 R2

    - by ronischuetz
    We are using in our lab a clean installation of Windows Server 2008 R2 running as a Hyper-V instance. Today we wanted to install a clean copy of SharePoint 2007 with SP1 on this machine and encountered an error that prevented us from installing it. The setup comes up with an error which is described here: http://support.microsoft.com/kb/962935 but this is not our scenario. A screenshot of this message can be found here: http://www.ronischuetz.com/images/SP_2007SP1_Inst_Error.png

    @Microsoft, my personal point of view is that it cannot be that we need to first install 2008, then SharePoint 2007 with SP1, then SP2, and then upgrade to 2008 R2. Nobody is going to be happy with this solution and I hope we find a fast way for customers to install SharePoint directly on 2008 R2. Anyway, thanks in advance for any further information on how we should proceed with this issue. The issue is listed here: http://social.msdn.microsoft.com/Forums/en-US/sharepointadmin/thread/91a6be50-9009-43c8-a37c-66cfb83d738f

    Read the article

  • What could be causing Windows to randomly reset the system time to a random time?

    - by Jonathan Dumaine
    My Windows 7 machine infuriates me. It cannot hold a date. At one point it all worked fine, but now it will decide that it needs to change the system time to a random time and date, either in the future or the past. There seems to be no correlation or set interval for when it happens. In an attempt to remedy this, I have:

        - Correctly set the time in the BIOS.
        - Replaced the motherboard battery with a new CR2032 (even checked it with a multimeter).
        - Tried disabling automatic internet synchronization via the "Date and Time" dialog.
        - Stopped, restarted, and left disabled the Windows Time service.

    Yet with all of these actions, the time continues to change. Any ideas?

    Read the article

  • Is it common to lie in job ads regarding the technologies in use?

    - by Desolate Planet
    Wanted: Experienced Delphi programmer to maintain ginormous legacy application and assist in migration to C#

    Later on, as the new hire settles into his role... "Oh, that C# migration? Yeah, we'd love to do that. But management is dead-set against it. Good thing you love Pascal, eh?"

    I've noticed quite a lot of this where I live (Scotland) and I'm not sure how common this is across IT: a company is using a legacy technology and they know that most developers will avoid them to keep mainstream technology on their resumes. So, they will put out an advertisement saying they are looking to move their product to some hip new tech (C#, Ruby, FORTRAN 99) and require someone who has exposure to both - but the migration is just a carrot on a stick, perpetually hung in front of the hungry developer as he spends each day maintaining the legacy app. I've experienced this myself, and heard far too many similar stories, to the point where it seems like common practice. I've learned over time that every company has legacy problems of some sort, but I fail to see why they can't be honest about it. It should be common sense to any developer that the technology in place is there to support the business and not the other way around. Unless the technology is hurting the business in some way, I hardly see any just cause for reworking the software stack to be made up of whatever is currently in vogue in the industry. Would you say that this is commonplace? If so, how can I detect these kinds of leading advertisements beforehand?

    Read the article

  • Network printer - Print direct or via shared printer on Server?

    - by NickC
    It has occurred to me that a workstation can connect to a printer in two ways:

        1) Printing directly to the IP of the printer, with the print driver installed locally.
        2) Printing to a \\Server\Printer1 share, with the print queue residing on the server.

    The question is: which way is preferred? I would assume that printing directly to a network printer, rather than going through the server, would be the most efficient from the point of view of network traffic. On the other hand, I guess a server printer share would be easier to manage, with the correct driver automatically being downloaded to the workstations. Also, what about using GPP (Server 2012) to install this printer on the workstations - does that require any specific approach?

    Read the article

  • Is there a way to create a consistent snapshot/SnapMirror across multiple volumes?

    - by Tomer Gabel
    We use a NetApp FAS 6-series filer with an application that spans multiple volumes. For backup purposes I would like to create a consistent snapshot that spans these volumes at the same point in time (or at least with an extremely low delta); additionally, we'd like to use SnapMirror to replicate the production environment to test volumes.

    The problem is in creating a consistent snapshot/SnapMirror, since these commands are not transactional and do not take multiple parameters. I tried scripting consecutive "snap create" or "snapmirror resync" commands via SSH, but there's always a 0.5-2 second difference between each snapshot. It's currently "good enough", but I'm seriously concerned about the consistency impact under increased load (we're currently in pre-production). Has anyone managed to create a consistent snapshot that spans several volumes? How did you pull it off?
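    One way to shrink (though not eliminate) the delta - my own sketch, not a verified NetApp-supported procedure, with the filer and volume names as placeholders - is to issue the "snap create" commands concurrently rather than consecutively, so the skew is dominated by connection jitter instead of per-command latency:

        TS=$(date +%Y%m%d%H%M%S)
        # fire both snapshots in parallel over SSH instead of back-to-back
        ssh admin@filer "snap create vol1 app_${TS}" &
        ssh admin@filer "snap create vol2 app_${TS}" &
        wait  # block until both commands have returned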

    Read the article

  • Gödel, Escher, Bach - Gödel's string

    - by Brad Urani
    In the book Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter, the author gives us a representation of the precursor to Gödel's string (Gödel's string's uncle) as: ~Ea,a': (I don't have the book in front of me but I think that's right). All that remains to get Gödel's string is to plug the Gödel number for this string into the free variable a''.

    What I don't understand is how to get the Gödel numbers for the functions PROOF-PAIR and ARITHMOQUINE. I understand how to write these functions in a programming language like FlooP (from the book) and could even write them myself in C# or Java, but the scheme that Hofstadter defines for Gödel numbering only applies to TNT (which is just his own syntax for natural number theory), and I don't see any way to write a procedure in TNT since it doesn't have any loops, variable assignments, etc.

    Am I missing the point? Perhaps Gödel's string is not something that can actually be printed, but rather a theoretical string that need not actually be defined? I thought it would be neat to write a computer program that actually prints Gödel's string, or Gödel's string encoded by Gödel numbering (yes, I realize it would have a gazillion digits), but it seems like doing so requires some kind of procedural language and a Gödel numbering system for that procedural language that isn't included in the book. Of course, once you had that, you could write a program that plugs random numbers into the variable "a" and runs the procedure PROOF-PAIR on it to test for theoremhood of Gödel's string. If you let it run for a trillion years you might find a derivation that proves Gödel's string.
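    For reference, the classic prime-power encoding from Gödel's original proof (Hofstadter's book uses a base-10 concatenation scheme instead, but any mechanically invertible scheme works): a string whose symbols have codes c_1, ..., c_n is assigned the number

        G = \prod_{i=1}^{n} p_i^{\,c_i} = 2^{c_1} \cdot 3^{c_2} \cdot 5^{c_3} \cdots

    where p_i is the i-th prime. Because prime factorization is unique, the string can always be recovered from G, which is what lets statements about strings be re-read as statements about numbers.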

    Read the article

  • Canonicals with differing content

    - by Jimbo Jonny
    Interesting conundrum here with canonicals. Let's say I have a site with a "verified" system where other websites can become so-and-so "verified". The URL they send people to to confirm verification is something like "blah.com/verify/company1" or "blah.com/verify/company2". But logically "blah.com/verify" itself is not verifying anyone in particular, so it redirects to the signup form to get verified, at "blah.com/verify/register".

    As far as the actual companies registered, I figure it doesn't make sense to index every individual URL when the only tiny difference is which company name it's saying yay or nay to being verified, so canonicals could come in handy on those pages to condense the indexing. Yet making "blah.com/verify" the canonical "hub" doesn't work well, because it's a signup form, not a verification page, so technically it has quite different content from the various verification pages themselves. But at the same time it's a bit unfair to choose one company to point all the canonical benefits to and use that as the "hub", yet a bit wasteful to have Google index every individual verification page and spread out all that link juice. Basically, I'm just looking for advice: what's best for this from a search engine standpoint?

    Read the article

  • Ubuntu 12.04 VPN default route issue when using the UI

    - by Pieter Pabst
    When setting up a VPN connection using the UI (System Settings > Network), the VPN connection is used as the default route after connecting to it. How can I avoid this? That is, without assigning manual addresses - I want the connection to use DHCP. It should use the eth* interfaces for my default route, like they are used before making the connection. The VPN route should only be used when connecting to addresses in the ppp0 adapter's range. If possible using the UI only; not that I'm not used to editing config files, but the point of using Ubuntu for me was the "it just works" concept (which is true 99% of the time - keep up the good work). I tried setting the "Method" combobox in the "Configure..." dialogue to "Automatic (VPN) addresses only". That doesn't make a difference. (Another thing I noticed: the UI shows my connection as a "Wireless connection" in the list control on the left. Don't know why... it has the right icon, but that's probably something for the bug tracker.)

    Read the article

  • Illumination and Shading for computer graphics class

    - by Sam I Am
    I am preparing for my test tomorrow and this is one of the practice questions. I solved it partially but I am confused by the rest. Here is the problem:

    Consider a gray world with no ambient and specular lighting (only diffuse lighting). The screen coordinates of a triangle P1, P2, P3 are P1 = (100,100), P2 = (300,150), P3 = (200,200). The gray values at P1, P2, P3 are 1/2, 3/4, and 1/4 respectively. The light is at infinity and its direction and gray color are (1,1,1) and 1.0 respectively. The coefficient of diffuse reflection is 1/2. The normals at P1, P2, P3 are N1 = (0,0,1), N2 = (1,0,0), and N3 = (0,1,0) respectively. Consider the coordinates of the three points P1, P2, P3 to be 0. Do not normalize the normals.

    I have computed that the illumination at the three vertices P1, P2, P3 is (1/4, 3/8, 1/8). I have also computed that the interpolation coefficients of a point P inside the triangle, whose coordinates are (220,160), are given by (1/5, 2/5, 2/5). Now I have 4 more questions regarding this problem.

    1) The illumination at P using Gouraud shading is: i) 1/2. The answer is 1/2, but I have no idea how to compute it.

    2) The interpolated normal at P is given by: i) (2/5, 2/5, 1/5) ii) (1/2, 1/4, 1/4) iii) (3/5, 1/5, 1/5)

    3) The interpolated color at P is given by: i) 1/2. Again, I know the correct answer but have no idea how to solve it.

    4) The illumination at P using Phong shading is: i) 1/4 ii) 9/40 iii) 1/2
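    The common machinery behind all four parts (a standard formula, applied here to the question's numbers) is barycentric interpolation: any per-vertex quantity Q is interpolated at P with the same weights,

        Q_P = w_1 Q_1 + w_2 Q_2 + w_3 Q_3, \qquad (w_1, w_2, w_3) = (1/5,\, 2/5,\, 2/5).

    For instance, interpolating the vertex gray values gives (1/5)(1/2) + (2/5)(3/4) + (2/5)(1/4) = 1/10 + 3/10 + 1/10 = 1/2, and interpolating the normals component-wise handles part 2. The difference between the two shading models: Gouraud interpolates the already-computed vertex illuminations, while Phong interpolates the normal first and then re-evaluates the lighting equation at P.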

    Read the article

  • Media Player is missing from the list in "Turn Windows features on or off"

    - by arsaKasra
    I decided to reinstall Media Player in Vista [this way], so I figured I should turn it off as a Windows feature. But when I continue with the procedure, I get an incomplete list of features (screenshot in the original post). I looked around a bit, wondering if that is only an option available in 7, and I have seen people saying different things. So, is this only available for 7, or is there something wrong going on here? Can I make Media Player show up in here? I am on a Toshiba Satellite A100 with Vista Home Premium OEM. I don't have my recovery disks or any restore point. Just to mention, I currently do have Media Player and it's working fine. Sorry if I can't think of any more details to add; please ask me for anything I should have included.

    Read the article

  • When does the "Do One Thing" paradigm become harmful?

    - by Petr
    For the sake of argument, here's a sample function that prints the contents of a given file line by line.

    Version 1:

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            string line;
            while (file.good()) {
                getline(file, line);
                cout << line << endl;
            }
        }

    I know it is recommended that functions do one thing at one level of abstraction. To me, though, the code above does pretty much one thing and is fairly atomic. Some books (such as Robert C. Martin's Clean Code) seem to suggest breaking the above code into separate functions.

    Version 2:

        void printLine(const string & line) {
            cout << line << endl;
        }

        void printLines(fstream & file) {
            string line;
            while (file.good()) {
                getline(file, line);
                printLine(line);
            }
        }

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            printLines(file);
        }

    I understand what they want to achieve (open file / read lines / print line), but isn't it a bit of overkill? The original version is simple and in some sense already does one thing - prints a file. The second version will lead to a large number of really small functions, which may be far less legible than the first version. Wouldn't it be, in this case, better to have the code in one place? At which point does the "Do One Thing" paradigm become harmful?

    Read the article

  • What's a nice explanation for pointers?

    - by Macneil
    In your own studies (on your own, or for a class) did you have an "aha" moment when you finally, really understood pointers? Do you have an explanation you use for beginner programmers that seems particularly effective? For example, when beginners first encounter pointers in C, they might just add &s and *s until it compiles (as I myself once did). Maybe it was a picture, or a really well-motivated example, that made pointers "click" for you or your student. What was it, and what did you try before that didn't seem to work? Were any topics prerequisites (e.g. structs, or arrays)? In other words, what was necessary to understand the meaning of & and *, to the point where you could use them with confidence? Learning the syntax and terminology or the use cases isn't enough; at some point the idea needs to be internalized.

    Update: I really like the answers so far; please keep them coming. There are a lot of great perspectives here, but I think many are good explanations/slogans for ourselves after we've internalized the concept. I'm looking for the detailed contexts and circumstances when it dawned on you. For example: I only somewhat understood pointers syntactically in C. I heard two of my friends explaining pointers to another friend, who asked why a struct was passed with a pointer. The first friend talked about how it needed to be referenced and modified, but it was just a short comment from the other friend where it hit me: "It's also more efficient." Passing 4 bytes instead of 16 bytes was the final conceptual shift I needed.

    Read the article

  • How to Manage Technical Employees

    - by Ajarn Mark Caldwell
    In my current position as Software Engineering Manager I have been through a lot of ups and downs with staffing, ranging from laying off everyone who was on my team as we went through the great economic downturn in 2007-2008, to numerous rounds of interviewing and hiring contractors and full-time employees, and converting some contractors to employee status. I have not yet blogged much about my experiences, but I plan to do that more in the next few months. But before I do that, let me point you to a great article that somebody else wrote on The Unspoken Truth About Managing Geeks that really hits the target. If you are a non-technical person who manages technical employees, you definitely have to read that article. And if you are a technical person who has been promoted into management, this article can really help you do your job and communicate up the line of command about your team. When you move into management, with all the new and different demands put on you, it is easy to forget how things work in the tech subculture and to lose touch with your team. This article will help you remember what's going on behind the scenes and perhaps explain why people who used to get along great no longer do, or why things seem to have changed since your promotion.

    I have to give credit to Andy Leonard (blog | twitter) for helping me find that article. I have been reading his series of ramble-rants on managing tech teams, and the above article is linked in the first rant in the series, entitled Goodwill, Negative and Positive. I have read a handful of his entries in this series and so far I pretty much agree with everything he has said, so of course I would encourage you to read through that series, too.

    Read the article

  • How to setup certificate authentication for MS SQL server 2008 R2 ?

    - by Stephane
    Hello, I have to connect an (ADO) application running on a standalone Windows 2003 R2 server to a SQL 2008 R2 database that is a member of the domain. I have set up a SQL authentication account for this and hard-coded the password into the connection string, but I wonder if it wouldn't be possible to use certificate-based authentication for this instead. I haven't been able to find any documentation regarding this apparently new functionality of SQL 2008 R2 anywhere. Could someone kindly point me at some good documentation? Or at least a description of the functionality and whether it could be used in my case or not? Thank you in advance

    Read the article

  • SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet

    - by pinaldave
    No! It is not a SQL Server issue or an SSMS issue. It is how things work, and there is a simple trick to resolve it. It is very common that when users copy a resultset to Excel, the floating point or decimal values are lost. The solution is very simple and requires a small adjustment in Excel. By default Excel is very smart: when it detects that the value being pasted is numeric, it changes the column format to accommodate that. Since zeros trailing the last digit after the decimal point have no numeric value, Excel automatically hides them. To prevent this from happening, the user has to convert the columns to Text format so Excel preserves the formatting as pasted. Here is how you can do it.

    Select the corner between A and 1 and right-click on it. This selects the complete spreadsheet. (If you want to change the format of an individual column, you can select just that column the same way.) In the menu, click on Format Cells... It will bring up a dialog in which the selected columns default to General; change that to Text. This changes the format of all the cells to Text. Now once again paste the values from SSMS into Excel. This time it will preserve the decimal values from SSMS. Solved!

    Any other trick you know to preserve the decimal values? Please leave a comment.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Manually accessing GMail via IMAP

    - by Jeff Mc
    I'm trying to connect to GMail IMAP, but I am unable to execute any commands after login. I'm running:

        openssl s_client -connect imap.gmail.com:993

    to connect. Then:

        * OK Gimap ready for requests from 128.146.221.118 42if6514983iwn.40
        . CAPABILITY
        * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA XLIST CHILDREN XYZZY SASL-IR AUTH=XOAUTH
        . OK Thats all she wrote! 42if6514983iwn.40
        . LOGIN {email removed} {password removed}
        * CAPABILITY IMAP4rev1 UNSELECT LITERAL+ IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE
        . OK {email removed} authenticated (Success)
        . CAPABILITY

    at which point it simply hangs with the connection open. I'm guessing GMail pushes you off to a node in a cluster after it authenticates me?
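    One thing worth trying (my own guess, not something from the transcript): IMAP requires CRLF line terminators, and s_client sends bare LFs by default, which servers tolerate inconsistently. The -crlf flag makes s_client translate outgoing line endings:

        # reconnect, translating line feeds to CRLF as the IMAP protocol expects
        openssl s_client -crlf -connect imap.gmail.com:993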

    Read the article

  • Funky mail sorting and grouping in Outlook 2007

    - by laurie
    In Outlook 2007 I group mails in a folder by subject, with the mails in each subject group sorted by Received date (newest to oldest). This works fine; I tick 'Subject' and 'Show in groups' in the context menu of the folder's table header. Life is good. But the subject groups in the mail folder are sorted alphabetically. I would like the group which has the newest mail to be the first group, similar to how arranging by 'Conversation' works. Can this be done? I'm not averse to an add-in/macro type solution if anyone can point me at examples of implementing custom sorting in Outlook.

    Read the article

  • How to handle wildly varying rendering hardware / getting baseline

    - by edA-qa mort-ora-y
    I've recently started with mobile programming (cross-platform, also with desktop) and am encountering wildly differing hardware performance, in particular with OpenGL and the GPU. I know I'll basically have to adjust my rendering code, but I'm uncertain of how to detect performance and what reasonable default settings are. I notice that certain shader functions are basically free in a desktop implementation but can be unusable on a mobile device. The problem is I have no way of knowing which features will cause which performance issues on all the devices. So my first issue is that even if I allow configuring options, I'm uncertain of which options I have to make configurable. I'm also wondering whether one just writes one very configurable pipeline, or whether one should have two distinct options (high/low). I'm also unsure of where to set the default. If I set it to the poorest performer, the graphics will be so minimal that any user with a modern device would dismiss the game. If I set it even at some moderate point, the low-end devices will basically become a slide-show. I was thinking perhaps that I just run some benchmarks when the user first installs and guess at what works, but I've not seen a game do this before.

    Read the article

  • setting up delegate or smtp forwarding

    - by cotiso
    For work we have a remote dedicated server that runs our web service and also our email services. At home (Comcast residential internet) I cannot send mail using the dedicated server's SMTP; Comcast spits back an error saying I can only use their SMTP server for sending mail. At work (Comcast business internet) we can use our dedicated server for sending mail with no problem, so I set up a box at work to forward SMTP traffic. I'm new to all this networking stuff, by the way. I used DeleGate to forward the SMTP traffic; can someone point me in the right direction on how to use this program to fix our issue? The DeleGate command I used to test is:

        delegated -P25 SERVER="smtp://dedicated.server.com:25" PERMIT=":::" -v

    I also opened up port 25 on the router so it points to my box's IP. Are there any other ways to fool Comcast into thinking I'm using my work's IP to send mail? My coworkers and I have been unable to send mail from home for some time now. Thanks

    Read the article

  • Step by step USB Wi-Fi driver install for mint 15

    - by Andy
    I'm running Linux Mint 15 32-bit on a Dell Inspiron 2200. I have given up trying to get the Broadcom BCM4318 built-in wireless working, so I bought a Dynamode Nano 802.11n wireless USB adapter. It came with an install disc, but I have no idea how to install it. Could someone please point me in the right direction with full, idiot-proof, step-by-step instructions? I've been using Ubuntu 12.04 but am totally new to Mint 15, so it's best to assume no previous knowledge. Thanks in advance.

    Read the article
