Search Results

Search found 1653 results on 67 pages for 'scott a lawrence'.

Page 5 of 67

  • Is there a Unix/Linux platform equivalent of Telligent Community (formerly Community Server)?

    - by Scott A. Lawrence
    Telligent Community combines blogs, wikis, forums, and file-sharing capabilities into a single product with single sign-on, using all Microsoft technologies. Is there an equivalent offering that runs on Unix/Linux? Or would I have to pick and choose individual product offerings and figure out another option for single sign-on across them? Are there plug-ins for something like WordPress or MovableType that might add the necessary functionality? A friend of mine is looking to add a "members-only" area to her company's website, and since they're hosted on Dreamhost (and can't afford StackExchange pricing yet), I'm trying to find other options for them.

    Read the article

  • Spreadsheet formula: lowest 100 values in a range

    - by Justin Lawrence
    Is there any way I could sum up the lowest 100 values within a range? I know that min() would give you the lowest value, but I need something to return the 100 lowest values. I just used 100 hypothetically to make it easier to understand what I'm trying to achieve. I can use any of the following spreadsheet apps: OpenOffice.org, Excel or Google Spreadsheets -- whichever works. Thanks a lot!!!
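
    One commonly cited approach (a sketch only, assuming the values live in A1:A1000) is to combine SMALL with an array-aware sum:

        =SUMPRODUCT(SMALL(A1:A1000, ROW(INDIRECT("1:100"))))

    SMALL(range, k) returns the k-th smallest value; feeding it the array 1..100 produced by ROW(INDIRECT("1:100")) yields the 100 smallest values, and SUMPRODUCT adds them without needing Ctrl+Shift+Enter in Excel or OpenOffice Calc. In Google Sheets the same idea may need to be wrapped in ARRAYFORMULA, and the range must contain at least 100 numbers or SMALL will error.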

    Read the article

  • Using a Blackberry Bold as an IP Modem on Windows 7

    - by Lawrence
    I recently installed Windows 7 and now cannot use my Blackberry Bold as a modem (via a USB cable). When I query the modem it is successful, and I have added the correct "at" commands. When I try to connect it says "connecting modems" but then times out with the following error: Error 638: The remote server is not responding in a timely fashion. I also have the latest desktop manager software installed.

    Read the article

  • Can Subject Alternative Name accommodate multiple virtual mail domains?

    - by Lawrence
    I am currently running a Postfix server with self-signed certificates serving one mail domain, mycompany.com; the mail server is mail.mycompany.com and so is the CN of the certificate. Now I need to add a new domain, mycompany.net, to the same server. Since the users already have the root of the old certificate, I'd like to reuse that. However, I'd like to issue a new certificate so users pointing Outlook/Thunderbird at mail.mycompany.net for SMTP do not get warnings. If I understand correctly, if I issue a new certificate with CN=mail.mycompany.com and a subjectAltName=DNS:mail.mycompany.net and have Postfix serve this, the client will not complain either way about the CN not matching the target host name. Am I correct in this assumption, or am I misunderstanding the concept of Subject Alternative Name? Just to head off side discussion: I do not want users on mycompany.net addresses to use the mycompany.com server name, because I might (for non-technical reasons) have to split up into two different locations, and I want an easily migratable setup.
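
    For what it's worth, a minimal sketch of how such a certificate could be generated with OpenSSL (file names and validity period are hypothetical, and the config layout can vary between OpenSSL versions):

        # san.cnf -- hypothetical request config listing both mail host names
        [ req ]
        distinguished_name = req_distinguished_name
        x509_extensions    = v3_req
        prompt             = no

        [ req_distinguished_name ]
        CN = mail.mycompany.com

        [ v3_req ]
        subjectAltName = DNS:mail.mycompany.com, DNS:mail.mycompany.net

        # self-sign with the existing key; adjust paths and lifetime as needed
        openssl req -new -x509 -key mail.key -out mail.crt -days 730 -config san.cnf

    Note that when a SAN extension is present, most clients ignore the CN entirely, so listing the original mail.mycompany.com name in the SAN as well (as above) is the safer choice.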

    Read the article

  • When does /tmp get cleared?

    - by John Lawrence Aspden
    I've taken to putting various files in /tmp, and I wondered what the rules on deleting them are. I imagine it's different for different distributions, and I'm particularly interested in the Ubuntu and Fedora desktop versions, but a nice general way of finding out would be a great thing. Even better would be a nice general way of controlling it! (Something like 'every day at 3 in the morning, delete any /tmp files older than 60 days, but don't clear the directory on reboot'.)
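
    By way of illustration only (package names and paths vary by distribution, and this is an assumption rather than any distro's default), a cron job built around tmpreaper (tmpwatch on Fedora) gives roughly the behaviour described:

        # /etc/cron.d/clean-tmp -- hypothetical; requires the tmpreaper package
        # At 03:00 every day, remove files under /tmp not accessed for 60 days
        0 3 * * * root tmpreaper 60d /tmp

    On Debian/Ubuntu of that era, boot-time clearing is governed by TMPTIME in /etc/default/rcS, so a job like the above can be combined with a TMPTIME setting that leaves /tmp intact across reboots.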

    Read the article

  • Cannot open SQL 2005 database in SQL Server Management Studio 2008 R2 on Windows 7

    - by Darryl Lawrence
    I have Windows 7 64-bit as my OS with SQL Server 2008 R2 installed. I can connect to SQL 2008 databases, but cannot connect to a SQL 2005 database. I can, however, connect to the 2005 database from a PC that has Windows XP as the OS and also has SQL Server 2008 R2 installed. So it seems that it works fine on XP but not on Windows 7 (32- or 64-bit). Is this an operating system issue? Error message:
        Cannot connect to OMRSQLV016\PRODSQL002.
        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (.Net SqlClient Data Provider)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476
        Error Number: -1, Severity: 20, State: 0

    Read the article

  • Windows Server vs Sharing

    - by Mark Lawrence
    Not sure if this is the right place for this question. I have a friend who owns a small business running 3 local machines that are connected to the internet, plus a server that each local machine connects to. He has recently bought a newer server, and by server I mean a Windows Vista box. He wants to use this purely to store data that the 3 local machines can access, kind of like a glorified external hard drive. I'm suggesting he forgo the server option and simply set up sharing on the 'server' box. I'm interested in hearing any suggestions as to whether the server idea is preferable.

    Read the article

  • Nginx rewrite rules, some work, some don't

    - by Lawrence Goldstien
    Here are the two rewrite rules. This one works:

        rewrite ^/knowledgebase/([0-9]+)/[a-z0-9_-]+.html$ /./knowledgebase.php?action=displayarticle&id=$1 last;

    This one doesn't:

        rewrite ^/announcements/([0-9]+)/[a-z0-9_-]+.html$ /./announcements.php?id=$1 last;

    There is no difference between the two as far as I can see. The URL to be rewritten for announcements is /announcements/2/New-Site-Design.html and should be rewritten to /announcements.php?id=2. I really can't see why the announcements one doesn't work when the knowledgebase one does. Any tips would be greatly appreciated.
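
    One low-risk way to see what nginx is actually doing with each request (a debugging sketch, not a fix) is to enable rewrite logging for the server block in question:

        # inside the relevant server { } block
        error_log /var/log/nginx/rewrite-debug.log notice;
        rewrite_log on;   # logs each rewrite decision at 'notice' level

    If the announcements rewrite never shows up in that log, the request is most likely being captured by another location block before the rewrite is ever reached.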

    Read the article

  • HTTPS subdomain does not load site under HTTP

    - by Mark Lawrence
    I recently installed an SSL certificate on a subdomain following the steps at cPanel. Let's just say the domain is example.com and the subdomain is sub.example.com.
    - I updated the userdata file for the subdomain and changed the IP address to the IP I wanted to use.
    - I updated the example.com zone file and changed the IP for the A record for the subdomain to the IP I wanted to use.
    - Using domain tools I checked that sub.example.com resolves to the new IP, which it does.
    - I then installed an SSL certificate on example.com and then on sub.example.com.
    When I visit http://sub.example.com I get the default Apache account screen, and when I visit https://sub.example.com I get the cPanel 404 page. If, however, I enter https://sub.example.com/admin (the location of my admin section), the page loads and I can log in. I thought this might be a propagation issue; however, as the subdomain resolves to the IP and I can reach the admin page, I suspect it is not a propagation issue but possibly an incorrect zone file. Any thoughts?

    Read the article

  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    Hello, I currently have 2 cPanel web servers that are little 1RU dual-CPU quad-core Xeons. They have plenty of resources for processing and handling web requests, never exceed 10% CPU usage, and have plenty of RAM. The problem is that they both have RAID 1 160 GB SAS hard disk drives that are 75% full and growing by the day. I didn't think the disk usage would be so high, but due to the nature of the sites hosted, this has become an issue. The easy fix would be just to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12TB of storage. The /home/ partition on each of the 1RU servers would be mounted to a NAS or SAN point on this central storage server. My questions are:
    - Has anyone got a cPanel setup where they mount /home/ to a NAS or SAN elsewhere? If so, can you provide details as to what you did and how it went? :)
    - Any recommendations on networking? Is gigabit Ethernet enough? Is TCP/IP going to be a noticeable performance problem? Anyone used a TOE key?
    - Anyone benchmarked or had any performance issues with SAN over NAS?
    Any help greatly appreciated. Scott
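
    On the first question, a bare-bones sketch of what an NFS-backed /home would look like on each web node (the host name, export path, and mount options here are purely illustrative):

        # /etc/fstab on each cPanel node -- hypothetical storage server 'storage01'
        storage01:/export/home   /home   nfs   rw,hard,intr,noatime   0 0

    Whether that performs acceptably under cPanel's mix of small-file mail, logs, and web content is exactly the kind of thing worth benchmarking over the proposed gigabit link before committing.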

    Read the article

  • Xvnc4 started from xinetd only displays empty gray X screen

    - by Scott Thomason
    Hi. I'm attempting to set up an Ubuntu 10.10 box so that anyone can connect to port 5900 and be greeted by the gdm login manager. To do so, I added a vnc entry in /etc/services and I am starting Xvnc4 using this xinetd config file:

        service vnc
        {
            protocol        = tcp
            socket_type     = stream
            wait            = no
            user            = nobody
            server          = /usr/bin/Xvnc
            server_args     = -geometry 1000x700 -depth 24 -broadcast -inetd -once -securitytypes None
        }

    This kind of works: I can start multiple sessions, all to port 5900, and I get an X screen. The problem is that I only get an empty, gray X screen with no applications started. I know that when you run vncserver from the command line it looks in your ~/.vnc/ directory for your passwd and xstartup files, and I think what I want to do is put "gnome-session" into the xstartup file. However, which xstartup file? The running user is "nobody", who obviously doesn't have a ~/.vnc/ directory. I tried a /root/.vnc/xstartup file and a ~scott/.vnc/xstartup file, and it doesn't look like they were even read. I changed the xinetd vnc service so that it would "strace" Xvnc4, looked through all the "open" lines, and didn't get a clue as to what file it was trying to read for xstartup. Can anyone help? I just want a terminal server where the user is presented with a gdm login screen.
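
    For context rather than a verified answer: with an inetd-spawned Xvnc there is no per-user session for an xstartup file to start, so the usual suggestion is XDMCP, letting the X server ask gdm for a login greeter directly. A sketch, assuming gdm on Ubuntu 10.10 reads /etc/gdm/custom.conf (worth double-checking):

        # /etc/gdm/custom.conf -- enable XDMCP so gdm will answer display queries
        [xdmcp]
        Enable=true

        # xinetd server_args: query gdm directly instead of broadcasting
        server_args = -geometry 1000x700 -depth 24 -query localhost -inetd -once -securitytypes None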

    Read the article

  • nginx proxy_pass POST 404 errors

    - by Scott
    I have nginx proxying to an app server, with the following configuration:

        location /app/ {
            # send to app server without the /app qualifier
            rewrite /app/(.*)$ /$1 break;
            proxy_set_header Host $http_host;
            proxy_pass http://localhost:9001;
            proxy_redirect http://localhost:9001 http://localhost:9000;
        }

    Any request for /app goes to :9001, whereas the default site is hosted on :9000. GET requests work fine, but whenever I submit a POST request to /app/any/post/url it results in a 404 error. Hitting the URL directly in the browser via GET /app/any/post/url hits the app server as expected. I found other people online with similar problems and added proxy_set_header Host $http_host;, but this hasn't resolved my issue. Any insights are appreciated. Thanks. Full config below:

        server {
            listen 9000; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

            root /home/scott/src/ph-dox/html; # root ../html; TODO: how to do relative paths?
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            location /app/ {
                # rewrite here sends to app server without the /app qualifier
                rewrite /app/(.*)$ /$1 break;
                proxy_set_header Host $http_host;
                proxy_pass http://localhost:9001;
                proxy_redirect http://localhost:9001 http://localhost:9000;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                allow ::1;
                deny all;
            }
        }
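
    Not a diagnosis of the 404 itself, but one commonly suggested simplification for this kind of setup is to let proxy_pass strip the prefix instead of using rewrite; a sketch:

        location /app/ {
            # trailing slash on proxy_pass replaces the matched /app/ prefix with /
            proxy_set_header Host $http_host;
            proxy_pass http://localhost:9001/;
            proxy_redirect http://localhost:9001/ http://localhost:9000/app/;
        }

    If POSTs still 404 with the rewrite removed, the response is likelier coming from the backend on :9001 than from nginx, which at least narrows down where to look.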

    Read the article

  • Microsoft and the open source community

    - by Charles Young
    For the last decade, I have repeatedly, in my imitable Microsoft fan boy style, offered an alternative view to commonly held beliefs about Microsoft's stance on open source licensing. In earlier times, leading figures in Microsoft were very vocal in resisting the idea that commercial licensing is outmoded or morally reprehensible. Many people interpreted this as all-out corporate opposition to open source licensing. I never read it that way. It is true that I've met individual employees of Microsoft who are antagonistic towards FOSS (free and open source software), but I've met more who are supportive or at least neutral on the subject. In any case, individual attitudes of employees don't necessarily reflect a corporate stance. The strongest opposition I've encountered has actually come from outside the company. It's not a charitable thought, but I sometimes wonder if there are people in the .NET community who are opposed to FOSS simply because they believe, erroneously, that Microsoft is opposed.

    Here, for what it is worth, are the points I've repeated endlessly over the years and which have often been received with quizzical scepticism.

    a) A decade ago, Microsoft's big problem was not FOSS per se, or even with copyleft. The thing which really kept them awake at night was the fear that one day, someone might find, deep in the heart of the Windows code base, some code that should not be there and which was published under GPL. The likelihood of this ever happening has long since faded away, but there was a time when MS was running scared. I suspect this is why they held out for a while from making Windows source code open to inspection. Nowadays, as an MVP, I am positively encouraged to ask to see Windows source.

    b) Microsoft has never opposed the open source community. They have had problems with specific people and organisations in the FOSS community. Back in the 1990s, Richard Stallman gave time and energy to a successful campaign to launch antitrust proceedings against Microsoft. In more recent times, the negative attitude of certain people to Microsoft's submission of two FOSS licences to the OSI (both of which have long since been accepted), and the mad scramble to try to find any argument, however tenuous, to block their submission was not, let us say, edifying.

    c) Microsoft has never, to my knowledge, written off the FOSS model. They certainly don't agree that more traditional forms of licensing are inappropriate or immoral, and they've always been prepared to say so. One reason why it was so hard to convince people that Microsoft is not rabidly antagonistic towards FOSS licensing is that so many people think they have no involvement in open source. A decade ago, there was virtually no evidence of any such involvement. However, that was a long time ago. Quietly over the years, Microsoft has got on with the job of working out how to make use of FOSS licensing and how to support the FOSS community. For example, as well as making increasingly extensive use of Github, they run an important FOSS forge (CodePlex) on which they, themselves, host many hundreds of distinct projects. The total count may even be in the thousands now. I suspect there is a limit of about 500 records on CodePlex searches because, for the past few years, whenever I search for Microsoft-specific projects on CodePlex, I always get approx. 500 hits. Admittedly, a large volume of the stuff they publish under FOSS licences amounts to code samples, but many of those 'samples' have grown into useful and fully featured frameworks, libraries and tools.

    All this is leading up to the observation that yesterday's announcement by Scott Guthrie marks a significant milestone and should not go unnoticed. If you missed it, let me summarise. From the first release of .NET, Microsoft has offered a web development framework called ASP.NET. The core libraries are included in the .NET framework which is released free of charge, but which is not open source. However, in recent years, the number of libraries that constitute ASP.NET have grown considerably. Today, most professional ASP.NET web development exploits the ASP.NET MVC framework. This, together with several other important parts of the ASP.NET technology stack, is released on CodePlex under the Apache 2.0 licence. Hence, today, a huge swathe of web development on the .NET/Azure platform relies four-square on the use of FOSS frameworks and libraries.

    Yesterday, Scott Guthrie announced the next stage of ASP.NET's journey towards FOSS nirvana. This involves extending ASP.NET's FOSS stack to include Web API and the MVC Razor view engine which is rapidly becoming the de facto 'standard' for building web pages in ASP.NET. However, perhaps the more important announcement is that the ASP.NET team will now accept and review contributions from the community. Scott points out that this model is already in place elsewhere in Microsoft, and specifically draws attention to development of the Windows Azure SDKs. These SDKs are central to Azure development. The .NET and Java SDKs are published under Apache 2.0 on Github and Microsoft is open to community contributions. Accepting contributions is a more profound move than simply releasing code under FOSS licensing. It means that Microsoft is wholeheartedly moving towards a full-blooded open source approach for future evolution of some of their central and most widely used .NET and Azure frameworks and libraries. In conjunction with Scott's announcement, Microsoft has also released Git support for CodePlex (at long last!) and, perhaps more importantly, announced significant new investment in their own FOSS forge.

    Here at Solidsoft we have several reasons to be very interested in Scott's announcement. I'll draw attention to one of them. Earlier this year we wrote the initial version of a new UK Government web application called CloudStore. CloudStore provides a way for local and central government to discover and purchase applications and services. We wrote the web site using ASP.NET MVC which is FOSS. However, this point has been lost on the ladies and gentlemen of the press and, I suspect, on some of the decision makers on the government side. They announced a few weeks ago that future versions of CloudStore will move to a FOSS framework, clearly oblivious of the fact that it is already built on a FOSS framework. We are, it is fair to say, mildly irked by the uninformed and badly out-of-date assumption that “if it is Microsoft, it can't be FOSS”. Old prejudices live on.

    Read the article

  • How would I setup iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which I have a particular user that has several aliases that all point to a single user (me, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly and log into my mail account on iMail and delete between 6 and 10 thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely). I've got the copying setup via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first match situation?) and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK that it's going to screw up all the sender information once the mail reaches my GMail account and show it as coming from me instead of the original senders (can anyone confirm that this is accurate?) An easy answer would be to delete my user account entirely, replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. So that leads me to creating a second, lesser known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or setup an inbound rule to bounce everything, but this is starting to sound kludgy to me. Does anyone know of a more direct work around to copying a user's incoming mail to an outside server and then deleting the local copy w/o removing their account entirely? Or is this just wishful thinking? Thanks in advance. Scott

    Read the article

  • Working with an external button box

    - by Scott
    I tried this question on Stack Overflow, but I was pointed here, so here goes: For a new project of my own, I am looking for a way to (for example) open a pop-up window on my laptop by pressing a button on an external device (to be built by myself, or at least bought) connected via USB. Basically I would be looking at something like an Arduino or Raspberry Pi (IF I am looking in the right direction) with buttons on it, and as soon as I hit a button on the external box of physical buttons, a command activates on my laptop and, for example, opens a popup window in which I can input text. Does anyone know: 1) if it is possible to do this at all, and 2) what equipment is needed for the external box and what programming is needed? I prefer .NET, but maybe it can only be done with software from the external box. If anyone can point me in the right direction, like a make/model for the external box or websites, I would be very happy. I have knowledge of Visual Studio/.NET but I am willing to learn other languages if .NET is not an option for this project. Thanks in advance. Scott. PS: If anyone knows of some better tags, or at least knows what I mean and needs me to edit the question, please do tell me... I am new on Stack Overflow/Superuser.
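
    On the .NET side, the usual pattern with an Arduino-style box is that the device simply writes a line over its USB serial port whenever a button is pressed, and the desktop app listens for it. A rough C# sketch (the COM port name, baud rate, and message format are assumptions about a hypothetical device):

        using System;
        using System.IO.Ports;
        using System.Windows.Forms;

        class ButtonBoxListener
        {
            static void Main()
            {
                // Adjust "COM3" and 9600 to match the actual device
                using (var port = new SerialPort("COM3", 9600))
                {
                    port.NewLine = "\n";
                    port.Open();
                    while (true)
                    {
                        // Expect the box to send e.g. "BUTTON:1\n" when a button is pressed
                        string line = port.ReadLine();
                        if (line.StartsWith("BUTTON:"))
                        {
                            // Stand-in for "open a pop-up window and take text input"
                            MessageBox.Show("Button pressed: " + line);
                        }
                    }
                }
            }
        }

    In practice you would probably use the SerialPort.DataReceived event or a background thread instead of a blocking loop, and show a small input Form rather than a MessageBox.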

    Read the article

  • Changing the default UITabBarController background color.

    - by Scott
    Hello, So I have an iPhone application running that is controlled at the highest level by a UITabBarController. It is the default black Tab Bar at the bottom that you see in many iPhone apps. I am kind of new to iPhone SDK programming, and I know I have seen other apps that have their own background color for the Tab Bar at the bottom. I am not sure if they are using this tab bar as I am, as the main controller for their app, but the question applies to this: How do I change the background color of the main UITabBarController in my application? I wanted to change it to a dark shade of green similar to the colors of the navigation bars and labels I have placed in my app. I find it weird how Apple makes it really easy to change the color of Navigation Bars (not controllers), and other things, but when it comes to controllers (in this case a Tab Bar Controller), I cannot find a single way to implement this cleanly and efficiently. Thanks! -Scott

    Read the article

  • WinDbg .for loop

    - by Scott
    I am having trouble getting the WinDbg .for command to work. I would like to dump an array of C++ structs. ?? gpTranData->mpApplCodes[0] works for a single entry, but I would like to loop through n of these.

        .for ($t0=0; $t0<(gpTranData->miApplCodeCount); $t0++) { ?? &gpTranData->mpApplCodes[$t0] }

    sounds logical to me, but I get:

        Numeric expression missing from '>miApplCodeCount);$t0++){ ?? &gpTranData->m_pApplCodes[$t0] }'

    Any ideas? Scott
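
    Without the actual symbols to test against, here is only a sketch of the documented .for syntax: pseudo-registers are updated with r rather than ++, and C++ expressions such as the member access need the @@c++( ) evaluator when they appear in the MASM-evaluated condition:

        .for (r $t0 = 0; @$t0 < @@c++(gpTranData->miApplCodeCount); r $t0 = @$t0 + 1)
        {
            ?? gpTranData->mpApplCodes[@$t0]
        }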

    Read the article

  • How would you validate a checkbox in ASP.Net MVC 2?

    - by Scott Mayfield
    Using MVC2, I have a simple ViewModel that contains a bool field that is rendered on the view as a checkbox. I would like to validate that the user checked the box. The [Required] attribute on my ViewModel doesn't seem to do the trick. I believe this is because the unchecked checkbox form field is not actually transmitted back during the POST, and therefore the validation doesn't run on it. Is there a standard way to handle checkbox "required" validation in MVC2? or do I have to write a custom validator for it? I suspect the custom validator won't get executed either for the reason mentioned above. Am I stuck checking for it explicitly in my controller? That seems messy... Any guidance would be appreciated. Scott
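
    For what it's worth, if the box is rendered with Html.CheckBoxFor, the helper also emits a hidden "false" input, so the field does post back (as false); either way, [Required] treats any non-null bool as valid. A small custom attribute is the usual workaround; a sketch (attribute and property names are illustrative, and this covers server-side validation only):

        using System.ComponentModel.DataAnnotations;

        // Passes validation only when the bound bool is true (e.g. "I accept the terms")
        public class MustBeTrueAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                return value is bool && (bool)value;
            }
        }

        // On the view model:
        // [MustBeTrue(ErrorMessage = "You must check the box.")]
        // public bool AcceptTerms { get; set; }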

    Read the article

  • How does Entity Framework 4.0 determine which parameters are required for the factory method of an entity?

    - by Scott Davies
    Hi, I am working with Entity Framework 4.0 (VS 2010 Beta 2, NOT the RC). I can model the EDM and produce the required database. When I ask VS to generate the code for the model, it generates the expected .designer.cs file. Looking at the factory methods the designer has generated for each entity, I've noticed that they don't include all of the properties of the entity. Is it correct to say that the factory method only includes properties that cannot be null? This appears to be the case, but I'm not entirely sure. Thanks, Scott

    Read the article
