Search Results

Search found 22173 results on 887 pages for 'concerned client'.

Page 467/887

  • Video monetising and simple shop, platform?

    - by fieldman
    I have a client who wants a single-product website. The product is a training video that they want to deliver virtually and, optionally, physically. I usually do all the front-end design and back-end development, but the budget is close to $0 to start with. So I'm looking for a platform like Shopify, or something similar, where a shop/cart can be set up quickly and simply with minimal up-front cost - but which can accommodate some kind of paywall (DRM too?) for the online video, with an option to purchase the physical DVD for an additional cost. Am I approaching this the wrong way altogether? Or do you know of any platform that will accommodate these specs?

    Read the article

  • I'm getting unrelated system messages in terminal?

    - by Zed
    For some reason, from time to time I keep getting weird system messages in my working terminal emulator, unrelated to anything I do. For example:
    [000:000] Browser XEmbed support present: 1
    [000:000] Browser toolkit is Gtk2.
    [000:001] Using Gtk2 toolkit
    [000:033] Starting client channel.
    [000:048] Read port file, port=33359
    [000:050] Initiated connection to GoogleTalkPlugin
    [000:154] Socket connection established
    [000:154] ScheduleOnlineCheck: Online check in 5000ms
    [000:203] Got cookie response, socket is authorized
    [000:203] AUTHORIZED; socket handshake complete
    [005:216] HandleOnlineCheck: Starting check
    [005:216] HandleOnlineCheck: OK; current state: 3
    Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
    After some investigation I concluded that those messages ARE from Firefox. However, I didn't start Firefox from the terminal. Or:
    nsBuiltinDecoderStateMachine::RunStateMachine queuing nsBuiltinDecoder::PlaybackEnded
    nsBuiltinDecoder::PlaybackEnded mPlayState=3
    nsBuiltinDecoderStateMachine::RunStateMachine queuing nsBuiltinDecoder::PlaybackEnded
    nsBuiltinDecoder::PlaybackEnded mPlayState=3
    I have no clue how this ends up in my working terminal. Any thoughts?

    Read the article

  • ArchBeat Link-o-Rama Top 10 for June 23 - July 1 2012

    - by Bob Rhubart
    The top 10 most popular items as shared via my social networks for the week of June 23 - July 1, 2012:
    - Software Architecture for High Availability in the Cloud | Brian Jimerson
    - How to Setup JDeveloper workspace for ADF Fusion Applications to run Business Component Tester? | Jack Desai
    - Podcast: Public, Private, and Hybrid Clouds | OTN ArchBeat Podcast
    - Read the latest news on the global user group community - June 2012 | IOUC
    - Embrace 'big data' now or fall behind the competition, analyst warns | TechTarget
    - ArchBeat Link-o-Rama Top 20 for June 17-23, 2012
    - Calculating the Size (in Bytes and MB) of a Oracle Coherence Cache | Ricardo Ferreira
    - A Universal JMX Client for Weblogic - Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koser
    - Progress 4GL and DB to Oracle and cloud | Tom Laszewski
    - BPM - Disable DBMS job to refresh B2B Materialized View | Mark Nelson
    Thought for the Day: "On Monday, when the sun is hot I wonder to myself a lot: 'Now is it true, or is it not, That what is which and which is what?'" — A. A. Hodge (July 18, 1823 – November 12, 1886) Source: ThinkExist.com

    Read the article

  • TFS 2012 Upgrade and SQL Server - SharePoint - OS Requirements.

    - by Vishal
    Hello folks, Recently I was involved in the installation and configuration of a Team Foundation Server 2010 farm for a client. A month after the installation and configuration was done and everything was working as it was supposed to, Microsoft released Team Foundation Server 2012 in mid-August 2012. The company had been using Borland StarTeam as their source control, and once they started using TFS 2010, their developers and project managers were loving it, since TFS is not just a source control tool and is much better than StarTeam. Anyway, long story short, they are now interested in upgrading to the newest version. Below are some basic hardware and software requirements for TFS 2012:

    Operating System:
    - Windows Server 2008 with SP2 (64-bit only)
    - Windows Server 2008 R2 with SP1 (64-bit only)
    - Windows Server 2012 (64-bit only)

    SQL Server:
    - SQL Server 2008 R2 and SQL Server 2012 (SQL Server 2008 is no longer supported.)
    - SQL Server Requirements for TFS.

    SharePoint Products:
    - SharePoint Server 2010 (SharePoint Foundation 2010, Standard, Enterprise)
    - MOSS 2007 (Standard, Enterprise)
    - Windows SharePoint Services 3.0 (WSS 3.0)
    - SharePoint Products Requirements for TFS.

    Project Server:
    - Project Server 2010 with SP1
    - Project Server 2007 with SP2
    - Project Server Requirements for TFS.

    More information on TFS upgrade requirements can be found here. Hardware recommendations can be found here.

    Thanks,
    Vishal Mody

    Read the article

  • Building a plug-in for Windows Live Writer

    - by mbcrump
    This tutorial will show you how to build a plug-in for Windows Live Writer. Windows Live Writer is a blogging tool that Microsoft provides for free. It includes an open API for .NET developers to create custom plug-ins. In this tutorial, I will show you how easy it is to build one. Open VS2008 or VS2010 and create a new project. Set the target framework to 2.0, the Application Type to Class Library, and give it a name. In this tutorial, we are going to create a plug-in that generates a Twitter message with your blog post name and a TinyUrl link to the blog post. It will do all of this automatically after you publish your post. Once we have the new project created, we need to set up the references. Add a reference to WindowsLive.Writer.Api.dll, located in the C:\Program Files (x86)\Windows Live\Writer\ folder if you are using the x64 version of Windows. You will also need to add references to System.Windows.Forms and System.Web from the .NET tab. Once that is complete, add your "using" statements so that they look like what is shown below:

    Live Writer Plug-In "Using"

        using System;
        using System.Collections.Generic;
        using System.Text;
        using WindowsLive.Writer.Api;
        using System.Web;

    Now we are going to set up some build events to make it easier to test our custom class. Go into the Properties of your project, select Build Events, click Edit Post-build, and copy/paste the following line:

        XCOPY /D /Y /R "$(TargetPath)" "C:\Program Files (x86)\Windows Live\Writer\Plugins\"

    Your screen should look like the one pictured below. Next, we are going to launch an external program on debug. Click the Debug tab and enter:

        C:\Program Files (x86)\Windows Live\Writer\WindowsLiveWriter.exe

    Your screen should look like the one pictured below. Now we have a blank project and we need to add some code. We start by adding the attributes for the Live Writer plug-in. Before we get started creating the attributes, we need to create a GUID. This GUID will uniquely identify our plug-in. To create a GUID in VS2008/2010, click Tools from the VS menu -> Create GUID. It will generate a GUID like the one listed below:

    GUID

        <Guid("56ED8A2C-F216-420D-91A1-F7541495DBDA")>

    We only want what's inside the quotes, so the final product should be: "56ED8A2C-F216-420D-91A1-F7541495DBDA". Go ahead and paste this snippet into your class, just above the public class declaration.

    Live Writer Plug-In Attributes

        [WriterPlugin("56ED8A2C-F216-420D-91A1-F7541495DBDA",
            "Generate Twitter Message",
            Description = "After your new post has been published, this plug-in will attempt to generate a Twitter status message with the Title and TinyUrl link.",
            HasEditableOptions = false,
            Name = "Generate Twitter Message",
            PublisherUrl = "http://michaelcrump.net")]
        [InsertableContentSource("Generate Twitter Message")]

    So far, it should look like the following. Next, we need to implement the PublishNotificationHook class and override OnPostPublish. I'm not going to dive into what the code is doing, as you should be able to follow pretty easily. The code below is the entire code used in the project.

    PublishNotificationHook

        public class Class1 : PublishNotificationHook
        {
            public override void OnPostPublish(System.Windows.Forms.IWin32Window dialogOwner, IProperties properties, IPublishingContext publishingContext, bool publish)
            {
                if (!publish) return;
                if (string.IsNullOrEmpty(publishingContext.PostInfo.Permalink))
                {
                    PluginDiagnostics.LogError("Live Tweet didn't execute, due to blank permalink");
                }
                else
                {
                    var strBlogName = HttpUtility.UrlEncode("#blogged : " + publishingContext.PostInfo.Title);  // Blog post title
                    var strUrlFinal = getTinyUrl(publishingContext.PostInfo.Permalink); // Blog permalink URL converted to a TinyURL
                    System.Diagnostics.Process.Start("http://twitter.com/home?status=" + strBlogName + strUrlFinal);
                }
            }

    We are going to go ahead and create a method to generate the short URL (TinyURL):

    TinyURL Helper Method

        private static string getTinyUrl(string url)
        {
            var cmpUrl = System.Globalization.CultureInfo.InvariantCulture.CompareInfo;
            if (!cmpUrl.IsPrefix(url, "http://tinyurl.com"))
            {
                var address = "http://tinyurl.com/api-create.php?url=" + url;
                var client = new System.Net.WebClient();
                return (client.DownloadString(address));
            }
            return (url);
        }

    Go ahead and build your project; it should have copied the .DLL into the Windows Live Writer Plugins directory. If it did not, then you will want to check your configuration. Once that is complete, open Windows Live Writer, select Tools -> Options -> Plug-ins, and enable the plug-in that you just created. Your screen should look like the one pictured below. Go ahead and click OK and publish your blog post. You should get a pop-up with the following. Hit OK and it should open Twitter and either ask for a login or fill in your status as shown below. That should do it; you can do so many other things with the API. I suggest that if you want to build something really useful, you consult the MSDN pages. This plug-in that I created was perfect for what I needed, and I hope someone finds it useful.

    Read the article

  • The video game "cloud computing" service OnLive announces its dates and price at the Game Developers Conference 2010

    Update from 11/03/10: The video game "cloud computing" service OnLive announces its dates and price at the Game Developers Conference 2010. Taking place right now, the Game Developers Conference 2010 gave Mike McGarvey, head of the OnLive project, the opportunity to make official several points about his "cloud computing" project for video games. We learn that the service will be available from June 17 on American soil. Nothing has been specified regarding the service's availability in Europe. The service will require the client to take out a monthly subscription at a price of $14.95 (roughly €11 per month). The service will be...

    Read the article

  • JavaOne San Francisco 2013 Content Catalog Live!

    - by Yolande Poirier
    There will be over 500 technical sessions, BOFs, tutorials, and hands-on labs offered. Note that "Securing Java" is a new track this year. The tracks are:
    - Client and Embedded Development with JavaFX
    - Core Java Platform
    - Edge Computing with Java in Embedded, Smart Card, and IoT Applications
    - Emerging Languages on the Java Virtual Machine
    - Securing Java
    - Java Development Tools and Techniques
    - Java EE Web Profile and Platform Technologies
    - Java Web Services and the Cloud
    In the Content Catalog you can search on tracks, session types, session categories, keywords, and tags. Or, you can search for your favorite speakers to see what they're presenting this year. And, directly from the catalog, you can share sessions you're interested in with friends and colleagues through a broad array of social media channels. Start checking out JavaOne content now to plan your week at the conference. Then, you'll be ready to sign up for all of your sessions when the scheduling tool goes live.

    Read the article

  • View/ViewModel Interaction - Bindings, Commands and Triggers

    It looks like I have a set of posts on ViewModel, aka MVVM, that have organically emerged into a series or story of sorts. Recently, I blogged about The Case for ViewModel, and another on View/ViewModel Association using Convention and Configuration, and a long while back now, I posted an Introduction to the ViewModel Pattern as I was picking up this pattern myself; it has since become the natural way for me to program client applications. This installment adds to this ongoing series. I've...
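
    To make the "bindings and commands" part of the title concrete, here is a minimal, generic ViewModel sketch. It is an illustration of the pattern under common conventions, not code taken from the series:

        // Minimal ViewModel sketch (illustrative only): the View binds to Name and
        // SaveCommand; raising PropertyChanged keeps the View in sync with the data.
        using System;
        using System.ComponentModel;
        using System.Windows.Input;

        public class PersonViewModel : INotifyPropertyChanged
        {
            private string _name;

            public event PropertyChangedEventHandler PropertyChanged;

            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name == value) return;
                    _name = value;
                    OnPropertyChanged("Name");
                }
            }

            public ICommand SaveCommand { get; private set; }

            public PersonViewModel()
            {
                SaveCommand = new DelegateCommand(() => Console.WriteLine("Saving " + Name));
            }

            private void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        // Small ICommand helper so the sketch is self-contained.
        public class DelegateCommand : ICommand
        {
            private readonly Action _execute;
            public DelegateCommand(Action execute) { _execute = execute; }
            public event EventHandler CanExecuteChanged { add { } remove { } }
            public bool CanExecute(object parameter) { return true; }
            public void Execute(object parameter) { _execute(); }
        }

    In a XAML-based framework the View then stays free of logic: a TextBox binds its Text to Name and a Button binds its Command to SaveCommand, which is the kind of View/ViewModel interaction the series discusses.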

    Read the article

  • How to move MOSS 2007 to another SharePoint Farm

    - by DipeshBhanani
    It was time for my first onsite client assignment on SharePoint. The client had a one-server production environment. They wanted to upgrade the topology to a completely new SharePoint farm of three servers. So, the task was to move the whole MOSS 2007 stack to the new server environment without impacting data. The last three words, "...without impacting data...", were actually putting pressure on my head. Moreover, the SSP had to be moved because additional information had been added for users apart from the AD import.

    I thought I only had to do a backup and restore. It appeared pretty easy at first thought. Just because of those three scary words, I decided to check the internet for guidance related to this scenario. I couldn't find anything except the general guidance for moving servers on the Microsoft TechNet site. I promised myself I would start blogging with this post if I was successful in this task. Well, it took me a long time to write this, but I finally made it. I hope it will be useful to everyone looking at a SharePoint server move.

    Before beginning the restoration, make sure that there is no difference in the SharePoint versions on the source and destination servers. Also check whether the state of the SharePoint installation at the time of backup and restore is the same (e.g. SharePoint-related service packs and patches, if any).

    The main tasks of the server move are as follows:

    1. Back up all the databases.
    2. Install and configure SharePoint in the new environment.
    3. Deploy all solutions (WSP files) globally to the destination server - to install the features attached to the solutions.
    4. Install all the custom features.
    5. Deploy/copy custom pages/files which were added to the "12 Hive" folder later.
    6. Restore the SSP.
    7. Restore My Site.
    8. Restore the other web applications.

    Tasks 3 to 5 are for making sure that we have configured the environment well enough for the web applications to be restored successfully. The main and complex task was restoring the SSP. I started restoring the SSP through Central Admin. After a while, the restoration status was updated to "unsuccessful". "Damn it, what went wrong?" I thought, looking at the error detail down the page. I can't remember the error message, but I corrected it and restored again.

    Actually, once you fail to restore the SSP, unless you clean up all the related stuff properly, your restoration will fail again and again. I wanted to find the actual reason, so I cleaned, restored, cleaned, restored... I tried almost 5-6 times and finally succeeded. I realized how pleasant it is to see the word "Successful" on the screen. Without wasting more of your reading time, let me write out all the detailed steps for restoring the SSP:

    1. Delete the SSP with the following STSADM command:
       stsadm -o deletessp -title <SSP name> -deletedatabases -force
       e.g.: stsadm -o deletessp -title SharedServices1 -deletedatabases -force
    2. Check and delete the web application associated with the SSP if it exists.
    3. Check and remove the "Alternate Access Mapping" associated with the SSP if it exists.
    4. Check and delete the IIS site as well as the application pool associated with the SSP if they exist.
    5. Stop the following services:
       - Office SharePoint Server Search
       - Windows SharePoint Services Search
       - Windows SharePoint Services Help Search
    6. Delete all the databases associated with or related to the SSP from SQL Server.
    7. Reset IIS.
    8. Start the following services again:
       - Office SharePoint Server Search
       - Windows SharePoint Services Search
       - Windows SharePoint Services Help Search
    9. Restore the new SSP.

    After the SSP restoration, all the other stuff completed very smoothly without any more issues. I made a few modifications to the sites for the change of server name, and finally the new environment was ready.
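
    For anyone who prefers to script the cleanup rather than click through it, below is a minimal C# sketch of the idea (my own illustration, not part of the original post). It simply shells out to stsadm and the standard service/IIS tools; the SSP title and the Windows service names ("OSearch", "SPSearch") are examples that should be checked against your own farm:

        // Hypothetical cleanup helper: automates the "delete the SSP, stop and
        // restart the search services, reset IIS" steps described above by
        // shelling out to the usual command-line tools. Run it as a farm
        // administrator with stsadm.exe on the PATH; all names are examples.
        using System;
        using System.Diagnostics;

        class SspCleanupSketch
        {
            // Run a console command, wait for it to finish, and echo its output.
            static void Run(string fileName, string arguments)
            {
                var psi = new ProcessStartInfo(fileName, arguments)
                {
                    UseShellExecute = false,
                    RedirectStandardOutput = true
                };
                using (var p = Process.Start(psi))
                {
                    Console.WriteLine(p.StandardOutput.ReadToEnd());
                    p.WaitForExit();
                }
            }

            static void Main()
            {
                // Step 1: delete the SSP and its databases (SSP title is an example).
                Run("stsadm.exe", "-o deletessp -title SharedServices1 -deletedatabases -force");

                // Step 5: stop the search services before cleaning up the databases.
                foreach (var svc in new[] { "OSearch", "SPSearch" })
                    Run("net.exe", "stop " + svc);

                // Step 7: reset IIS.
                Run("iisreset.exe", "");

                // Step 8: start the search services again before restoring the new SSP.
                foreach (var svc in new[] { "OSearch", "SPSearch" })
                    Run("net.exe", "start " + svc);
            }
        }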

    Read the article

  • Interactive manifest editing with the Automated Installer Manifest Wizard

    - by Glynn Foster
    Oracle Solaris 11.2 adds a new Automated Installer (AI) Manifest Wizard to allow administrators to more easily create AI manifests for use in provisioning new client systems in the data center. The AI Manifest Wizard is a web-based interface that steps administrators through the basics of the AI manifest - target disks and layout selection, additional ZFS pools and datasets, IPS publisher and package selection, and the creation of any Oracle Solaris Zone virtual environments. The end result is an AI manifest without having to directly edit XML, and this can then be associated with an appropriate AI service. To get started, check out How To Create an Automated Installer Manifest with an Interactive Wizard.

    Read the article

  • Avoid GPL violation by moving library out of process

    - by Andrey
    Assume there is a library that is licensed under the GPL. I want to use it in a closed-source project. I would do the following:
    1. Create a small wrapper application around the GPL library that listens on a socket, parses messages, calls the GPL library, and then returns the results.
    2. Release its sources (to comply with the GPL).
    3. Create a client for this wrapper in my main application and don't release its sources.
    I know that this adds huge overhead compared to static/dynamic linking, but I am interested in the theoretical approach.
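
    As a concrete (and purely illustrative) picture of step 3, here is a minimal C# client for such a wrapper. The localhost port and the line-based message format are assumptions made for the sketch, not part of any real library:

        // Hypothetical client for the out-of-process GPL wrapper described above.
        // Assumes the wrapper listens on localhost:5000 and speaks a simple
        // line-based request/response protocol; both the port and the message
        // format are illustrative.
        using System;
        using System.IO;
        using System.Net.Sockets;

        class WrapperClient : IDisposable
        {
            private readonly TcpClient _tcp;
            private readonly StreamReader _reader;
            private readonly StreamWriter _writer;

            public WrapperClient(string host, int port)
            {
                _tcp = new TcpClient(host, port);
                var stream = _tcp.GetStream();
                _reader = new StreamReader(stream);
                _writer = new StreamWriter(stream) { AutoFlush = true };
            }

            // Send one request line and read one response line.
            public string Call(string request)
            {
                _writer.WriteLine(request);
                return _reader.ReadLine();
            }

            public void Dispose()
            {
                _tcp.Close();
            }

            static void Main()
            {
                using (var client = new WrapperClient("127.0.0.1", 5000))
                {
                    // The wrapper process (GPL-licensed, sources released) does the
                    // real work; the closed-source side only exchanges messages.
                    Console.WriteLine(client.Call("TRANSFORM hello world"));
                }
            }
        }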

    Read the article

  • Can any postfix guru assist me in determining how emails are still being sent via my server from unauthorized sources?

    - by Dave
    Hi all, I'm getting a little concerned, as I run a small server hosting a number of websites and manage the email for a few dozen people. Just recently I've had a couple of notifications from SpamCop alerting me that spam has been sent from my server, and when I look over the logs from time to time I can indeed see that there are many repeated attempts of mail being sent from my server. Most of the time it gets knocked back by the destination servers, but sometimes it gets through. Unfortunately I'm not a Linux or Postfix expert. I can get by, but I had thought I had my machine locked down quite securely: I don't allow relaying, and when I check the online DNS/MX tools they tend to report my server as being OK. So I'm not sure where to take it now, and I'm hoping someone might be able to throw me a few pointers. I get lots of entries like this in my MAIL.INFO log:
    Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 66B88257C12F: from=<>, size=3116, nrcpt=1 (queue active)
    Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 614C2257C1BC: from=<[email protected]>, size=2490, nrcpt=3 (queue active)
    and
    Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6471]: 0A316257C204: to=<[email protected]>, relay=none, delay=384387, delays=384384/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)
    Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6470]: 5848C257C20D: to=<[email protected]>, relay=none, delay=384373, delays=384370/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)
    Then there tend to be connection timeouts. So from what I see, even though I have relaying disabled, something is getting by and trying to send. If you can help, that would be greatly appreciated, and I can supply any further logging/config info. Thanks

    Read the article

  • Email links open in a new window [closed]

    - by Dan
    I'm asking this as an opinion question. How does everyone treat email links opening in a new window if the reader's default email client is web based? This way? <a href="mailto:[email protected]">email me</a>. It will open fine for app-based email clients but opens in the same window for web-based clients. Or this way? <a href="mailto:[email protected]" target="_blank">email me</a>. It will open in a new tab for web-based email clients but leaves a blank tab behind for app-based clients. I can't really seem to find the best of both worlds. What does everyone else do?

    Read the article

  • How do I restore the privacy pane of the system settings?

    - by Sparhawk
    Checking out screenshots of the system settings in Ubuntu 12.10, it seems that I am missing a few. When I open up my settings, I cannot see Privacy, Backup, and Management Service. Also, nothing comes up when I search the Dash for these words. In a previous edition of Ubuntu, I purged Ubuntu One (with sudo apt-get purge ubuntuone-client python-ubuntuone-storage* ubuntuone-couch ubuntuone-installer) and, appropriately, I cannot see the Ubuntu One icon. I've also previously purged unity-lens-music. Perhaps I purged some metapackage that removed the others? In any case, how do I restore the privacy pane (as well as the other icons)? Also, any suggestions as to what I did to remove the packages in the first place (and hence how to avoid this problem in the future)?

    Read the article

  • Uploading a file automatically for a speed test?

    - by Abhi
    I am building a web UI for an internet-connection device, and one of the requirements is a speed test. I know the basic concept of how a speed test works: a file is downloaded for a limited time, then the same file is uploaded again, and the speed is tracked at regular intervals. Downloading the file is not an issue, but how am I supposed to upload the file without the client knowing that the file is being uploaded? I've read through a lot of documentation, but I'm still not able to find an answer to how I can upload a file from the client's machine without asking them to select it.
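
    One way to look at it (my own rough sketch, not from the question): the upload half of a speed test does not need a user-chosen file at all; the client can generate a throw-away buffer in memory, push it to a test endpoint that discards the body, and time the transfer. The endpoint URL below is hypothetical:

        // Hypothetical upload-speed probe: times the POST of an in-memory random
        // buffer, so the user never has to pick a file. Endpoint URL is assumed.
        using System;
        using System.Diagnostics;
        using System.Net.Http;
        using System.Threading.Tasks;

        class UploadSpeedProbe
        {
            static async Task Main()
            {
                const int payloadBytes = 5 * 1024 * 1024;   // 5 MB of random data
                var payload = new byte[payloadBytes];
                new Random().NextBytes(payload);

                using (var http = new HttpClient())
                {
                    var timer = Stopwatch.StartNew();
                    var response = await http.PostAsync(
                        "http://device.local/speedtest/upload",   // hypothetical endpoint
                        new ByteArrayContent(payload));
                    response.EnsureSuccessStatusCode();
                    timer.Stop();

                    double megabits = payloadBytes * 8.0 / 1000000.0;
                    Console.WriteLine("Upload: {0:F2} Mbit/s", megabits / timer.Elapsed.TotalSeconds);
                }
            }
        }

    In a browser UI the same idea applies: generate a Blob of random bytes in script and send it with XMLHttpRequest or fetch, timing the request, so the user never touches a file picker.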

    Read the article

  • Is it good practice to analyse who introduced each bug?

    - by Michal Czardybon
    I used to analyse the performance of programmers in my team by looking at the issues they have closed. Many of the issues are, of course, bugs. And here another important performance aspect comes in: who introduced the bugs. I am wondering whether creating a custom "Blamed" field in the issue tracking system, for recording the person who introduced the problem, is a good practice. On one hand it seems OK to me to promote personal responsibility for quality, and this could reduce the additional work we have due to careless programming. On the other hand this is negative; things are sometimes vague, and sometimes there is a reason such as "this thing had to be done very quickly due to a client's...". What do you think?

    Read the article

  • Is there any reason why a delete method/field/function refactoring doesn't exist?

    - by raisercostin
    An operation in an interface is obsolete, so I decided to delete it. It seems that there is no automatic support for such a "refactoring". For me it is a refactoring operation, since the behavior of the code is preserved: nobody (tests, client APIs) will notice that the operation was removed. In Eclipse, for Java code, on a method in an interface I have the following options: rename, move, change method signature, inline, extract interface, extract superclass, use supertype where possible, pull up, push down, introduce parameter object, introduce indirection, generate declared type. Is there any reason why a delete method/field/function refactoring doesn't exist?

    Read the article

  • Feasibility of Windows Server 2008 DFS replication over WAN link

    - by CesarGon
    We have just set up a WAN link that connects two buildings in our organisation. The link is provided by a 100-Mbps point-to-point line. We have a Windows Server 2008 R2 domain controller on each side of the link. Now we are planning to set up DFS for file services across the organisation. The estimated data volume is over 2 TB, and it will grow at approximately 20% annually. My idea is to set up a file server in each building and install DFS so that all the contents stay replicated over the 100-Mbps link. I hope that this will ensure that any user is directed to the closest (and fastest) server when requesting a file from the DFS folders. My concern is whether a 100-Mbps WAN link is good enough to guarantee DFS replication. I have no experience with DFS, so any solid advice is welcome. The line is reliable (i.e. it doesn't drop often) and our data transfer tests show that a 5 MB/sec transfer rate is easily achieved; this is approximately 40% of the nominal bandwidth. I am also concerned about latency: how long will users need to wait to see a change on one side of the link after the change has been made on the other side? My questions are:
    - Is this link between networks a reliable infrastructure on which to set up DFS replication?
    - What latency times would be typical (seconds, minutes, hours, days)?
    - Would you recommend that we go for DFS in this scenario, or is there a better alternative?
    Many thanks.
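
    For a rough sense of scale (my own back-of-the-envelope figures, using the numbers quoted above): at the observed 5 MB/s, just seeding the initial 2 TB over the link takes the better part of a week, which is worth factoring into expectations before steady-state replication latency even becomes the question.

        // Back-of-the-envelope: how long does the initial replication of 2 TB
        // take at the measured 5 MB/s? (Inputs from the question; math is mine.)
        using System;

        class DfsSeedingEstimate
        {
            static void Main()
            {
                double dataMB = 2.0 * 1024 * 1024;   // 2 TB expressed in MB
                double rateMBps = 5.0;               // measured transfer rate
                double seconds = dataMB / rateMBps;
                Console.WriteLine("{0:F0} hours, i.e. about {1:F1} days",
                                  seconds / 3600, seconds / 86400);
                // Prints roughly 117 hours, about 4.9 days.
            }
        }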

    Read the article

  • Do logged in users need to browse a site over https?

    - by Luke
    I've never thought it was necessary, but a client has requested that all web pages served to logged-in users be delivered over HTTPS. Aside from the implementation standpoint, which I don't think I'm going to pursue, is there any real reason for this request? For clarity: the login/logout process, account settings, registration preferences and all user-related scripts are served over HTTPS, but I can't see the point in my news articles, press releases, events, etc. being served in this manner. Am I missing something?

    Read the article

  • Extracting GPS Data from JPG files

    - by Peter W. DeBetta
    I have been very remiss in posting lately. Unfortunately, much of what I do now involves client work that I cannot post. Fortunately, someone asked me how he could get a formatted list (e.g. tab-delimited) of files with GPS data from those files. He also added the constraint that this could not be a new piece of software (company security) and had to be scriptable. I did some searching around, and found some techniques for extracting GPS data, but was unable to find a complete solution. So, I did...(read more)
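
    The post is truncated here, but as an illustration of the general technique only (not necessarily the author's eventual solution): GDI+ exposes the EXIF GPS tags through Image.GetPropertyItem, so a short script-style program can walk a folder and emit one tab-delimited line per geotagged JPG.

        // Illustrative sketch: reads EXIF GPS latitude/longitude from JPGs via the
        // GDI+ property tags (0x0001-0x0004) and prints a tab-delimited line per
        // file. Requires a reference to System.Drawing.
        using System;
        using System.Drawing;
        using System.IO;
        using System.Linq;

        class GpsDump
        {
            // EXIF GPS coordinates are three rationals (degrees, minutes, seconds),
            // each stored as a numerator/denominator pair of 32-bit integers.
            static double ToDegrees(byte[] v)
            {
                double d = BitConverter.ToUInt32(v, 0) / (double)BitConverter.ToUInt32(v, 4);
                double m = BitConverter.ToUInt32(v, 8) / (double)BitConverter.ToUInt32(v, 12);
                double s = BitConverter.ToUInt32(v, 16) / (double)BitConverter.ToUInt32(v, 20);
                return d + m / 60.0 + s / 3600.0;
            }

            static void Main(string[] args)
            {
                string folder = args.Length > 0 ? args[0] : ".";
                foreach (string path in Directory.GetFiles(folder, "*.jpg"))
                {
                    using (Image img = Image.FromFile(path))
                    {
                        // 0x0001 GPSLatitudeRef, 0x0002 GPSLatitude,
                        // 0x0003 GPSLongitudeRef, 0x0004 GPSLongitude
                        if (!img.PropertyIdList.Contains(0x0002) ||
                            !img.PropertyIdList.Contains(0x0004))
                            continue;

                        double lat = ToDegrees(img.GetPropertyItem(0x0002).Value);
                        double lon = ToDegrees(img.GetPropertyItem(0x0004).Value);
                        string latRef = System.Text.Encoding.ASCII.GetString(img.GetPropertyItem(0x0001).Value);
                        string lonRef = System.Text.Encoding.ASCII.GetString(img.GetPropertyItem(0x0003).Value);
                        if (latRef.StartsWith("S")) lat = -lat;
                        if (lonRef.StartsWith("W")) lon = -lon;

                        Console.WriteLine("{0}\t{1:F6}\t{2:F6}", Path.GetFileName(path), lat, lon);
                    }
                }
            }
        }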

    Read the article

  • How to get working cups command line tools on Server 14.04

    - by Nick
    It looks like some of the commands, such as lpr and lprm, have broken versions that don't work with CUPS. These commands worked properly on 10.04. The CUPS lpr has an -o option, but no lpr is installed when cups is installed, and the lpr installed with apt-get install lpr does not have the -o option and does not appear to be the CUPS version of lpr. man lpr shows "BSD General Commands Manual" at the top, whereas man lpr on the Ubuntu 10.04 server said "Apple, Inc." in the same spot, which leads me to believe the "wrong" lpr is in the lpr package, or package names got moved around. There is also an lprng package, but trying to install it wants to remove cups and cups-client. lprm also returns lprm: PrinterName: unknown printer when PrinterName is in fact a valid printer installed with CUPS and does appear in lpstat -t. How do I get the proper CUPS versions of lpr and lprm working on Server 14.04?

    Read the article

  • My First Dive into Ocean of SharePoint

    - by DipeshBhanani
    Hello guys, I am Dipesh Bhanani, an IT consultant at an MNC. I have worked with many clients as a SharePoint consultant over the years. I have been on various successful engagements deploying Project Server 2003/2007, SharePoint 2003, MOSS 2007 and InfoPath 2007. People have asked me for years why I don't start blogging. I have come across many technical hurdles in the ocean of SharePoint and resolved them passionately, so I thought: why shouldn't I share my knowledge to alleviate SharePoint troubles? Wish me luck on my ride through the world of SharePoint, and please share the good and the bad with me right here on my blog! More to come soon!

    Read the article

  • Is UPS worthwhile for home equipment?

    - by Jon Skeet
    Over the years, I've had to throw away quite a few bits of computing equipment (and the like):
    - Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms etc.)
    - Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help)
    - One external hard disk still claiming to function, but corrupting data
    - One hard disk as part of a NAS RAID array "going bad" (as far as the NAS was concerned)
    (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything is on surge-protected gang sockets, but there's nothing to smooth over a power cut. Is a home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually running during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.

    Read the article

  • Video conference/chat tool that can be embedded in own website needed

    - by Olaf
    We are looking for a means (a tool or a commercial service) to enable a closed user group to start a live video conference in a browser, as part of the company website. Something like Skype, but embedded and available to everybody who has access to the page into which the tool is embedded. Most services require registration and the creation of a chat room on their website, or, like Skype and similar solutions, the installation of extra software. What we need is a solution with some kind of "hidden login", performed by the site's client script (which knows who the user is and forwards the credentials to the service). Any suggestions?

    Read the article
