Search Results

Search found 13577 results on 544 pages for 'the great gonzo'.


  • Cost-effective way to provide static media content

    - by james
    I'd like to deliver around 50MB of static content, either as about 30 individual files of up to 10MB each or grouped into 3 compressed files, roughly 5k to 20k times a day. Ideally I'd like some very basic security around serving the data, to ensure that a request comes from the expected source, but if dropping the security brings a big reduction in price then that's an option. Does anyone have suggestions beyond what I've found? Google App Engine is $0.12/GB and I believe has a 10MB file size limit, so I'd have to break the data up a bit; a rough calculation suggests this would cost me about $30 to $120 a day. Alternatively, I've seen plain static content delivery with no logic capabilities, like Usenet.nl, at what I think works out to about $0.025/GB, which would cost me about $6 to $25 a day. Am I going about these calculations the right way, and is there a better option for purely static content at reasonably high delivery volume? Again, some basic security would be great, but if cost is greatly reduced without it then I'm open to that.
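
    As a rough sanity check on the figures in the question (a back-of-the-envelope sketch only, using the asker's own numbers of 50 MB per delivery, 5k-20k deliveries a day, and the quoted $0.12/GB and $0.025/GB rates):

        // Back-of-the-envelope daily cost estimate for static content delivery.
        // All figures come from the question above; none are verified prices.
        var GB_PER_DELIVERY = 50 / 1024;            // 50 MB expressed in GB

        function dailyCost(deliveriesPerDay, pricePerGB) {
          return deliveriesPerDay * GB_PER_DELIVERY * pricePerGB;
        }

        [5000, 20000].forEach(function (n) {
          console.log(n + " deliveries/day: $" + dailyCost(n, 0.12).toFixed(2) +
                      " at $0.12/GB, $" + dailyCost(n, 0.025).toFixed(2) + " at $0.025/GB");
        });

    Running this gives roughly $29-$117 a day at $0.12/GB and about $6-$24 a day at $0.025/GB, which matches the ranges quoted above.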

    Read the article

  • OEM to Virtual Machine for Disaster Recovery / Business Continuity

    - by James
    Hello, we are trying to deploy a Business Continuity appliance (Zenith BDR) for a customer, and one of its features is the ability to bring up a failed server in a virtual machine on the appliance. Great feature. However, the customer has an OEM version of Server 2003 on that server, and it comes up requiring immediate re-activation since it is now on different hardware. We would be happy with the 2-3 day grace period we expected, but that isn't happening. What are the options, short of purchasing another VLK copy of Server 2008 and re-installing the server with that license just so we can set this thing up?

    Read the article

  • How to improve designer and developer workflow?

    - by mbdev
    I work in a small startup with two front-end developers and one designer. Currently the process starts with the designer sending a PNG file with the whole page design, plus assets if needed. My task as a front-end developer is to convert it to an HTML/CSS page. My workflow currently looks like this:
    1. Lay out the distinct parts using HTML elements.
    2. Style each element very roughly (floats, minimal fonts and padding) so I can modify it using inspection.
    3. Using Chrome Developer Tools, add/change CSS attributes while updating the CSS file.
    4. Refresh the page after a certain number of changes.
    5. Use Pixel Perfect to refine the design further.
    6. Sit with the designer to make the last adjustments.
    Inferring the paddings, margins and font sizes by trial and error takes a lot of time, and I feel the process could be more efficient, but I'm not sure how to improve it. Using PSD files is not an option, since buying Photoshop for each developer is currently not being considered. A design guide is also not available, since the design is still evolving and new features keep being introduced. Ideas for improving the process above, and descriptions of how the process looks in your company, would be great.

    Read the article

  • Node.js testing with Jasmine, databases, and pre-existing code

    - by Jim Rubenstein
    I've recently built the start of a core system that is likely to turn into a monster product. I'm building the system with Node.js, and after I got a small base built I decided it would be a great idea to start using some sort of automated test suite. I chose Jasmine, as it seems pretty solid and has a lot of features for stubbing, spying, and mocking methods and classes. The application has a lot of external data stores and API access (Kestrel, MySQL, MongoDB, Facebook, and more). My issue is that I've got a good amount of code written that I want to start testing, as it represents the underpinnings of the application. What are the best practices for testing methods and classes that access external APIs I may or may not have control over? As an example, I have a data structure that fetches a bunch of data from a MySQL database, and I want to test the method that retrieves the data, but I'm not sure how to go about it. I could test the fetch method, which is supposed to return an array of objects, but to isolate the method from the database I need to define my own fixture data. So I end up stubbing the MySQL execution and returning a static dataset; in effect I'm writing a function that returns the dataset that makes my test pass. That doesn't seem to actually test the code, other than verifying that a method is being called. I know this is kind of abstract and vague, but the idea of testing seems very abstract too, so hopefully someone has some experience and can guide me in the right direction. Any advice or reading is more than welcome. Thanks in advance.
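
    For what it's worth, here is a minimal sketch of the kind of isolation described above, using a Jasmine spy to stub out the MySQL call (the module and function names are hypothetical stand-ins, and the spy syntax assumes Jasmine 2.x):

        // Hypothetical MySQL wrapper; in a real test this would be the require()'d client.
        var mysqlClient = {
          query: function (sql, cb) { throw new Error("tests should never hit a real database"); }
        };

        // The code under test: fetch rows and map them to plain objects.
        function fetchAllUsers(cb) {
          mysqlClient.query("SELECT id, name FROM users", function (err, rows) {
            if (err) { return cb(err); }
            cb(null, rows.map(function (r) { return { id: r.id, name: r.name }; }));
          });
        }

        describe("fetchAllUsers", function () {
          it("maps rows returned by the MySQL layer into plain objects", function () {
            var fixtureRows = [{ id: 1, name: "james" }, { id: 2, name: "kevin" }];

            // Stub the database call so the test exercises only the mapping logic.
            spyOn(mysqlClient, "query").and.callFake(function (sql, cb) {
              cb(null, fixtureRows);
            });

            var result;
            fetchAllUsers(function (err, users) { result = users; });

            expect(mysqlClient.query).toHaveBeenCalled();
            expect(result.length).toBe(2);
            expect(result[0].name).toBe("james");
          });
        });

    As the asker notes, a test like this only proves the mapping and wiring, not the SQL itself, which is why stubs of this kind are usually paired with a few slower integration tests that run against a disposable test database.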

    Read the article

  • What are the criteria for judging web development process efficiency?

    - by Ahmed safan
    I'm working as a web developer and I want to be able to determine whether I'm efficient. Does this come down to how long it takes to accomplish tasks such as:
    - Server-side code for the site logic, in one language or several (PHP, ASP, ASP.NET)
    - Client-side code, like JavaScript with jQuery for AJAX, menus and other interactivity
    - Page layout, HTML, CSS (colors, fonts; though I have no artistic sense!)
    - The needs of the site and how it will work (planning)
    How can I judge how long it will take to complete a website? The site has a CMS for adding and editing news, products and articles about the company's experience. The client can also edit team information, add recreational activities and a logo gallery with compressed PSD downloads, and send messages to cPanel and to email. You are starting from scratch except for jQuery and PHPMailer. How can I estimate how long the job will take, and how can I calculate the time required to finish new projects? Sorry for the scattered questions; this is my first attempt and I want to benefit from the great experience of those who have it.

    Read the article

  • Virtually the fastest way to try Solaris 11 (and Solaris 10 zones)

    - by dminer
    If you're looking to try out Solaris 11, there are the standard ISO and USB image downloads on the main page. Those are great if you're looking to install Solaris 11 on hardware, and we hope you will. But if you take the time to look down the page, you'll find a link off to the Oracle Solaris 11 Virtual Machine downloads. There are two downloads there:
    - A pre-built Solaris 10 zone
    - A pre-built Solaris 11 VM for use with VirtualBox
    If you're looking to try Solaris 11 on x86, the second one is what you want. Of course, this assumes you have VirtualBox already (and if you don't, now's the time to try it; it's a terrific free desktop virtualization product). Once you complete the 1.8 GB download, it's a simple matter of unzipping the archive and a few quick clicks in VirtualBox to get a Solaris 11 desktop booted. While it's booting, you'll get to run through the new system configuration tool (that'll be the subject of a future posting here) to configure networking, a user account, and so on. So what about that pre-built Solaris 10 zone download? It's a really simple way to get acquainted with the Solaris 10 zones feature, which you may well find indispensable in transitioning an existing Solaris 10 infrastructure to Solaris 11. Once you've downloaded the file, it's a self-extracting executable that configures the zone for you; all you have to supply is an IP address for the zone. It's really quite slick! I expect we'll do a lot more pre-built VMs and zones going forward, as that's a big part of being a cloud OS; if there's one that would be really useful for you, let us know.

    Read the article

  • Can't find nfsbooted for Kerrighed PXE boot with Ubuntu Lucid Server

    - by Pengin
    I'm following installation guides for PXE booting and Kerrighed, but I can't find the package nfsbooted for Ubuntu 10.04. Where did it go? Context: at work I have access to 8 mini-ITX PCs and am trying to build a cluster. My plans include trying Condor, GridGain and Hadoop, and recently Kerrighed has caught my eye (I realise these are all for different kinds of things; I'm just evaluating). Ideally, I'd like all the nodes to network boot from a single server, since that seems much easier to manage, plus I can 'borrow' additional PCs for a while without touching their hard drives. I've been getting on great with Ubuntu Lucid Server (10.04), trying to follow the only guides I can find to get PXE booting (and ultimately Kerrighed) to work. One guide is for Ubuntu 8.04 and the other is for Debian. They both refer to a package I can't seem to find: nfsbooted. Has this package been replaced? Am I doing something daft?

    Read the article

  • How to get the Three.js import/export scripts into Blender on Ubuntu?

    - by Bane
    I have been working with 3D primitives in Three.js, but now I want to import some models. I plan on using Blender, which I have just downloaded with:

        sudo apt-get install blender

    However, I was instructed to put the import/export scripts in the .blender/2.62/scripts/addons folder, but it does not exist! .blender/2.62 does exist, but it only contains a config folder. The next thing I did was manually change the script search path in Blender's preferences from // to my home folder's scripts directory, which contained the required io_mesh_threejs folder (which, in turn, had the .py scripts inside). I saved the changes and restarted Blender, but still nothing: there is no mention of Three.js in the menu at all! What do I do? It would be great if I knew Blender's installation path, because maybe I could put those scripts there manually. Where should it be installed? EDIT: these are the scripts I'm talking about, along with the instructions: https://github.com/mrdoob/three.js/tree/master/utils/exporters/blender.

    Read the article

  • Launch elasticsearch dockerfile using my own elasticsearch.yml

    - by Kevin
    I am launching elasticsearch via a Dockerfile found here: https://index.docker.io/u/ehazlett/elasticsearch/ It works great. I need to define my own hosts, as my environment does not support multicast of any kind. I understand that my options are:
    1) supply the hosts as a command-line parameter when elasticsearch is run, or
    2) modify my elasticsearch.yml file to set the hosts.
    I know how to build the yml; what I need to know is how to launch elasticsearch via Docker using my own yml instead of the one in the container. Is that possible? Thanks.

    Read the article

  • What's a nice explanation for pointers?

    - by Macneil
    In your own studies (on your own, or for a class) did you have an "ah ha" moment when you finally, really understood pointers? Do you have an explanation you use for beginner programmers that seems particularly effective? For example, when beginners first encounter pointers in C, they might just add &s and *s until it compiles (as I myself once did). Maybe it was a picture, or a really well-motivated example, that made pointers "click" for you or your student. What was it, and what did you try before that didn't seem to work? Were any topics prerequisites (e.g. structs, or arrays)? In other words, what was necessary to understand the meaning of & and *, to the point where you could use them with confidence? Learning the syntax and terminology or the use cases isn't enough; at some point the idea needs to be internalized. Update: I really like the answers so far; please keep them coming. There are a lot of great perspectives here, but I think many are good explanations/slogans for ourselves after we've internalized the concept. I'm looking for the detailed contexts and circumstances when it dawned on you. For example: I only somewhat understood pointers syntactically in C. I heard two of my friends explaining pointers to another friend, who asked why a struct was passed with a pointer. The first friend talked about how it needed to be referenced and modified, but it was just a short comment from the other friend where it hit me: "It's also more efficient." Passing 4 bytes instead of 16 bytes was the final conceptual shift I needed.

    Read the article

  • Recommendations for a JetDirect print server for USB 2.0 printers?

    - by eleven81
    I have been using some older HP JetDirect 300x print servers with a variety of parallel printers over the years. These things work great with every printer I have tried them with, including HPs, Dells, and even a Mountbatten braille embosser! They have been a boon for printers whose internal network cards fail but whose parallel ports keep working: I don't have to throw away a $500 printer that is one year and a week old, and can keep using it for many more years. Now, however, very few printers come with parallel ports; most come solely with USB connections and network cards. When the network card fails but the printer is still usable, I want to continue using it on the network with a JetDirect card. In summary: does anyone have recommendations for JetDirect (or similar) print servers that will work as well with USB 2.0 printers of any manufacturer as my old JetDirect 300x units do?

    Read the article

  • How to define type-specific scripts when using a 'type object' programming pattern?

    - by Erik
    I am in the process of creating a game engine written in C++, using the C/C++ SQLite interface to achieve a 'type object' pattern. The process is largely similar to what is outlined here (thank you, Bob Nystrom, for the great resource!). I have a generally defined Entity class: when a new object is created, its data is taken from a SQLite database and the object is pushed into a vector of pointers, which is then iterated over, calling update() on each object. All the ints, floats and strings load fine, but the script() member of Entity is proving to be an issue; it's not much fun having a bunch of stationary objects lying around my game world. The only solutions I've come up with so far are:
    1. Create a monolithic EntityScript class with member functions encompassing all game AI, and call the corresponding script while iterating through the Entity vector (not ideal).
    2. Create bindings between C++ and a scripting language. This would seem to get the job done, but implementing it (given the potential memory overhead) and learning a new language feels like overkill for a small team (2-3 people) that knows the entirety of the existing game engine.
    Can you suggest any alternatives? My ideal situation would be that, to add content to the game, one would simply add a script file to the appropriate directory and append the object data to the SQLite database. All that is required is for a handful of integers and floats to be passed between the engine and the script file.

    Read the article

  • TechEd India Recap

    Last week I spoke at TechEd India and the Visual Studio 2010 launch. It was a great event with over 3,000 attendees, far more than attended the Visual Studio launch in Las Vegas! Senior vice president of the Developer Division at Microsoft, S. Somasegar, did an awesome keynote. I did three sessions:
    - Building applications with OData
    - Building Line of Business Applications with WCF and Silverlight 4.0
    - Building Silverlight Business Applications with RIA Services
    Something happened to me that has never happened before in my 13+ years as a professional international speaker: I lost my voice! I was in Indonesia the week before, picked up a small bug, and had no voice. This was a new challenge for me, and Microsoft offered to cancel my sessions, but I figured: why would I let having no voice stop me from delivering my sessions? So I decided to throw away the slides and just code. I had no voice anyway, ...

    Read the article

  • Collision detection in 3D space

    - by dreta
    I've got to write what can be summed up as a complete 3D game from scratch this semester. Up until now I have only programmed 2D games in my spare time; the transition doesn't seem tough and the game is simple. The only issue I have is collision detection. The only things I could find were AABBs, bounding spheres, or recommendations for various physics engines. I have to program a submarine that will be moving freely inside a cave system; as far as I know I can't use physics libraries, so none of the above solves my problem. Up until now I was using SAT for my collision detection. Are there any similarly good algorithms, but crafted for 3D collision? I'm not talking about octrees or other optimizations; I'm talking about direct collision detection of one set of 3D polygons with another set of 3D polygons. I thought about using SAT twice, projecting the mesh from the top and the side, but then it seems so hard to even divide 3D space into convex shapes, and that seems like far too much computation even with octrees. How do professionals do it? Could somebody shed some light?
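
    For reference, the separating axis theorem carries over to 3D unchanged; only the set of candidate axes grows (the face normals of both convex shapes plus the cross products of their edge directions). A minimal sketch of the per-axis interval test, in plain JavaScript with hypothetical vertex arrays of {x, y, z} objects:

        // Project all vertices of a convex shape onto an axis; return the 1D interval.
        function projectOntoAxis(vertices, axis) {
          var min = Infinity, max = -Infinity;
          vertices.forEach(function (v) {
            var d = v.x * axis.x + v.y * axis.y + v.z * axis.z; // dot product
            if (d < min) { min = d; }
            if (d > max) { max = d; }
          });
          return { min: min, max: max };
        }

        // SAT for two convex shapes: they intersect only if their projections
        // overlap on every candidate axis (face normals of A, face normals of B,
        // and cross products of each edge direction of A with each of B).
        function convexShapesIntersect(vertsA, vertsB, candidateAxes) {
          return candidateAxes.every(function (axis) {
            var a = projectOntoAxis(vertsA, axis);
            var b = projectOntoAxis(vertsB, axis);
            return a.max >= b.min && b.max >= a.min; // intervals overlap on this axis
          });
        }

    This only holds for convex pieces, which is why a concave cave mesh is usually decomposed into (or approximated by) convex parts first; GJK is the other common choice once the shapes are convex.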

    Read the article

  • Using Native Drag and Drop in HTML 5 pages

    - by nikolaosk
    This is going to be the eighth post in a series of posts regarding HTML 5. You can find the other posts here, here, here, here, here, here and here. In this post I will show you how to implement drag and drop functionality in an HTML 5 page using jQuery. This is a great feature, and we no longer need to resort to plugins like Silverlight or Flash to achieve it; this is also called the native approach to drag and drop. I will use some events, and I will write code that responds when these events are fired. As I said earlier, we need to write JavaScript to implement the drag and drop functionality. I will use the very popular jQuery library. Please download the library (minified version) from http://jquery.com/download
    I will create a simple HTML page. There will be two thumbnail pics on it, plus the drag and drop area; the user will drag the thumb pics into it and they will resize to their actual size. The HTML markup for the page follows:

        <!doctype html>
        <html lang="en">
        <head>
          <title>Liverpool Legends Gallery</title>
          <meta charset="utf-8">
          <link rel="stylesheet" type="text/css" href="style.css">
          <script type="text/javascript" charset="utf-8" src="jquery-1.8.1.min.js"></script>
          <script language="JavaScript" src="drag.js"></script>
        </head>
        <body>
          <header>
            <h1>A page dedicated to Liverpool Legends</h1>
            <h2>Drag and Drop the thumb image in the designated area to see the full image</h2>
          </header>
          <div id="main">
            <img src="thumbs/steven-gerrard.jpg" big="large-images/steven-gerrard-large.jpg" alt="John Barnes">
            <img src="thumbs/robbie-fowler.jpg" big="large-images/robbie-fowler-large.jpg" alt="Ian Rush">
          </div>
          <div id="drag">
            <p>Drop your image here</p>
          </div>
        </body>
        </html>

    There is nothing difficult or fancy in the HTML markup above. I have a link to the external jQuery library and another JavaScript file in which I will implement the whole drag and drop functionality. The code for the CSS file (style.css) follows:

        #main {
          float: left;
          width: 340px;
          margin-right: 30px;
        }
        #drag {
          float: left;
          width: 400px;
          height: 300px;
          background-color: #c0c0c0;
        }

    These are simple CSS rules. This post cannot be a tutorial on CSS; for all these posts I assume that you have basic HTML, CSS and JavaScript skills. Now I am going to create a JavaScript file (drag.js) to implement the drag and drop functionality. I will provide the whole code for the drag.js file and then explain what I am doing in each step.

        $(function() {
          // Make every image inside #main draggable.
          var players = $('#main img');
          players.attr('draggable', 'true');

          // When a drag starts, stash the full-size image URL in the drag data.
          players.bind('dragstart', function(event) {
            var data = event.originalEvent.dataTransfer;
            var src = $(this).attr("big");
            data.setData("Text", src);
            return true;
          });

          // The drop target: replace its contents with the full-size image.
          var target = $('#drag');
          target.bind('drop', function(event) {
            var data = event.originalEvent.dataTransfer;
            var src = (data.getData('Text'));
            var img = $("<img></img>").attr("src", src);
            $(this).html(img);
            if (event.preventDefault) event.preventDefault();
            return (false);
          });

          // Allow dropping by cancelling the default dragover behaviour.
          target.bind('dragover', function(event) {
            if (event.preventDefault) event.preventDefault();
            return false;
          });

          players.bind('dragend', function(event) {
            if (event.preventDefault) event.preventDefault();
            return false;
          });
        });

    In these lines:

        var players = $('#main img');
        players.attr('draggable', 'true');

    we grab all the images in the #main div, store them in a variable, and make them draggable. Then, in the dragstart event handler, we associate the custom data attribute value with the item being dragged:

        players.bind('dragstart', function(event) {
          var data = event.originalEvent.dataTransfer;
          var src = $(this).attr("big");
          data.setData("Text", src);
          return true;
        });

    Then we create a variable to get hold of the dropping area:

        var target = $('#drag');

    The drop handler implements what happens when the user drops the image in the designated area on the page:

        target.bind('drop', function(event) {
          var data = event.originalEvent.dataTransfer;
          var src = (data.getData('Text'));
          var img = $("<img></img>").attr("src", src);
          $(this).html(img);
          if (event.preventDefault) event.preventDefault();
          return (false);
        });

    The dragend event is fired when the user has finished the drag operation:

        players.bind('dragend', function(event) {
          if (event.preventDefault) event.preventDefault();
          return false;
        });

    When event.preventDefault() is called, the default action of the event is not triggered. Have a look at the picture below to see how the page looks before the drag and drop takes place; then I simply drag and drop a picture into the dropping area. It works!!! Hope it helps!!

    Read the article

  • NDC Oslo

    - by Alan Smith
    Originally posted on: http://geekswithblogs.net/asmith/archive/2013/06/14/153136.aspx
    2013 has been a hectic year for conference presentations so far; NDC in Oslo was the 6th conference I have attended, and my session there was my 11th conference presentation this year. I have been meaning to make the short trip over from Stockholm to NDC for a few years, and this was the first time I made it. I had heard a lot of great things about the event, and was impressed with the location, the sessions, and most of all the atmosphere around the event booths and during the party on Thursday evening. The session I was delivering was my “Grid Computing with 256 Windows Azure Worker Roles & Kinect” demo, which I have delivered at many events over the past 12 months. The demo went fine. I’m always a little nervous when I try to scale out the application to 256 worker roles; it almost always works well and the application will scale in minutes, but very occasionally there can be a longer delay due to the provisioning process in the Windows Azure data centers. This would not be an issue for many scenarios, but when standing on stage in front of a room full of developers you really want things to run smoothly. A number of people have suggested that I should pre-provision an environment so that it is guaranteed to be there when I run the demo during a session. For me the aim has always been to show the rapid scalability of cloud-based platforms live on stage. Pre-provisioning an environment may make for a more reliable demo, but to me that would be cheating, and not half as much fun!

    Read the article

  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure HTML website which I have hosted with Hostmonster. Hostmonster had very good reviews on a comparison website and in general; so far they appear to be perfectly good in every way... at least I thought so until just now. I have been making lots of edits to my site on an almost daily basis. My site now appears on the first page (7th on the list) of a Google search for my most important keyphrase, but I did notice a problem with the snippet chosen by Google. I asked a question on this site about snippets and got some great answers. I then made some modifications to my metadata, and within 48 hours the Google snippet for my search was perfect. The odd thing, though, was that the "cached" version Google had still appeared very old, from about three weeks earlier. This seemed very odd: how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember even making one, but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt and it contained the disturbing line below:

        sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz

    Intuitively this looks like some kind of junk or spam trick, and I had indeed been getting some spam from "attracta". So my questions are:
    1. Should I simply delete this robots.txt?
    2. Was the file there all along, placed there because of some commercial tie-in between Attracta and Hostmonster?
    3. Does the Attracta robots file explain the lack of re-caching?

    Read the article

  • SharePoint Saturday DC 2010 Slides, Demo Scripts, and Pictures

    - by Brian Jackett
    Wow! This past weekend I attended SharePoint Saturday Washington DC (SPSDC), which was quite an event, to say the least. For those unfamiliar, SharePoint Saturday is a community-driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint. This was the fifth SharePoint Saturday I've attended and the fourth I've spoken at. SPSDC was a bit different from most SharePoint Saturdays, mostly due to its scale: we had almost 950 attendees, over 80 speakers presenting close to 90 sessions, and dozens of sponsors. A big thanks goes out to the organizers of this event; they put in a lot of hard work and time to pull it off and should be very proud of the end result. For SPSDC I presented “The Power of PowerShell + SharePoint 2007”. I want to thank all of the attendees of my session for coming and asking some great questions. Below you can find the slides and demo scripts for this session. I also took some photos throughout the day (not as many as usual, since so much was going on), so check them out. If you have any follow-up questions, feel free to drop me a line in the comments or via the contact link at the top of the site.
    Slides and Scripts: Click here for the demo scripts and slides posted on my SkyDrive. VERY IMPORTANT NOTE: one thing I forgot to mention in my presentation is that in order to run code against the SharePoint API you need to load the Microsoft.SharePoint.dll assembly first. Run the command below on the PowerShell console to do that:

        [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

    Photos: Facebook album -or- my album on the Windows Live site (higher-res shots).
    -Frog Out

    Read the article

  • haproxy - pass original / remote IP in TCP mode

    - by Vito Botta
    I've got haproxy set up with keepalived for load balancing and IP failover of a Percona cluster, and since it works great I'd like to use the same LB/failover setup for another service/daemon. I've configured haproxy this way:

        listen my_service 0.0.0.0:4567
            mode tcp
            balance leastconn
            option tcpka
            contimeout 500000
            clitimeout 500000
            srvtimeout 500000
            server host1 xxx.xxx.xxx.xx1:4567 check port 4567 inter 5000 rise 3 fall 3
            server host2 xxx.xxx.xxx.xx2:4567 check port 4567 inter 5000 rise 3 fall 3

    The load balancing works fine, but the service sees the IP of the load balancer instead of the actual IPs of the clients. In HTTP mode it's quite easy to have haproxy pass along the remote IP, but how do I do it in TCP mode? This is critical due to the nature of the service I need to load balance. Thanks! Vito

    Read the article

  • getaddrinfo: command not found

    - by jebbie
    I've installed a new Ubuntu 12.04 on an AWS EC2 instance and everything worked fine until now. I followed the instructions in this great tutorial: http://www.exratione.com/2012/05/a-mailserver-on-ubuntu-1204-postfix-dovecot-mysql/ Now I'm at the "installing monit" step, and when I restart the service I get this error message:

        monit: Cannot translate '(none)' to FQDN name -- Name or service not known

    I started googling, and someone there writes that monit uses getaddrinfo in its startup process to determine the hostname. OK, so I thought I'd try for myself what getaddrinfo returns, and then I got:

        getaddrinfo: command not found

    I guess something is missing on my system. Can anyone help?

    Read the article

  • Where's my start button?

    - by Dennis Vroegop
    I have to be honest here for a moment. The one thing people complain about most when they talk about Windows 8 is that they miss the Start button. You know, that dreaded thing that everybody hated when it was introduced... I usually don't go into these kinds of discussions unless I am personally involved, but this one I cannot let go. Why are people doing this? Windows 8 is a great OS. They have changed, updated and perfected so many things that there is plenty to talk or write about. Yet all articles or discussions come down to "Where's my Start button?" To save myself from having to explain this every single time, I wrote this post, and from now on I will simply refer to this blog when I get asked that question. Here it is. Your start menu is there. It's right in front of your nose. It's two-dimensional, and it's got huge buttons (although they are more than just buttons; they're alive and therefore called Live Tiles). Just go through those tiles and click whatever you want to start up. Don't want to look for an item? Just start typing. Really, it is that simple. When you are on the start screen, just start typing (part of) the name of the program you want and you'll find it. As you can see in the attached example, I started typing "word" and it found Word, Wordfeud, Wordament, etc. If you want to find something other than a program (say you want to change the region you're in), just click on Settings (it will already show you how many hits there are in that section). People, my request is: dive into something before you complain about it. Look around. This feature is so much easier to use than the old stuff, but you have to know about it. So, I won't get into this discussion anymore.

    Read the article

  • File access with hostname or ip only - no domain?

    - by Jonathon
    It seems likely that this is an obvious question, but I'm having trouble tracking down any useful information. Normally, when accessing files in a particular directory on a server, I'm able to create a virtual host and assign a domain, root directory location, and so on. However, I am in a situation where I have server space and need to access files with only a hostname. Is this possible? For example, let's say the hostname is 123hostname.com and the file I want access to is in /home/sub-directory/filename.php. How do I get at it via a browser? I've tried: http://123hostname.com/home/sub-directory/filename.php ...and some other variations on that theme (which I can't post because new users are restricted to one link per message), but I'm generally stuck. Any help, even if it's just to let me know that this isn't possible without some additional configuration, would be great. Thank you!

    Read the article

  • Syncing SharePoint 2010 with Outlook

    - by uruit
    SharePoint offers the possibility to connect to content in a document library directly from Outlook, edit the documents offline, and then sync when the connection is restored. This is very useful if we are working at home and want to access a shared document (e.g. VPN connection settings) or continue working directly on a file. Steps to configure the connection:
    1. Browse to the SharePoint document library you want to connect and click on "Connect to Outlook".
    2. Click Allow to confirm.
    3. In Outlook you will see the documents as Outlook email items, with the ability to preview them. When a document is updated, Outlook will notify you that you have unread items. If you want to edit a file, the corresponding Office tool (Word, Excel, PowerPoint) will ask if you want to update the server after saving a change; it is really straightforward.
    4. Finally, I recommend adding the IP address of your SharePoint server to the secure sites in order to prevent Outlook from asking for your Windows credentials every time you open Outlook.
    Outlook is a great tool, letting you work in a really integrated way; don't miss this amazing feature. This feature is also available in SharePoint Online :)
    Post by: Marcelo Martinez, UruIT (www.uruit.com/sharepoint_outsourcing.html). Leaders in Nearshore Outsourcing from South America

    Read the article

  • Arguments for a coding standard?

    - by acidzombie24
    A few friends and I are planning to work on a project together, and we want a COMPLETELY DIFFERENT coding standard. We do NOT want to use the coding standard the libraries/language use; it's our project and we want to mess around. So I came here to ask what you think are good standards and the arguments for them (or what not to do and the arguments against it). The styles I remember most are (each is illustrated in the short sketch below):
    - Upper-casing the entire word
    - Camel and Pascal casing
    - Using '_' to separate each word
    - Prefixing or postfixing letters or words (I hate 'm' for member, but I think IsCond() is a good function name, and SomethingException is a good postfix example)
    - Using '_' at the start or end of words
    - Brace placement: on a new line or the same line?
    I know of libraries that use Pascal casing on all public and protected members. But would you ever get confused about whether something is a function, a variable, or even a property, if the language supports them? And if you decide to make a public member private (or vice versa), wouldn't that create a lot of fix-up work or inconsistencies? Is prefixing C to every class a good idea? What do you think, and why?
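
    Purely to illustrate the styles listed above, here is the same arbitrary identifier written in each convention (a neutral JavaScript sketch, not a recommendation for any of them):

        // One hypothetical name, spelled in the conventions from the question.
        var MAXRETRYCOUNT   = 3;  // upper-casing the entire word
        var maxRetryCount   = 3;  // camel casing
        var MaxRetryCount   = 3;  // Pascal casing
        var max_retry_count = 3;  // '_' separating each word
        var m_maxRetryCount = 3;  // 'm' prefix for a member
        var _maxRetryCount  = 3;  // leading '_'

        // Brace placement: same line...
        function isCondSameLine(x) {
          return x > 0;
        }
        // ...versus new line.
        function isCondNewLine(x)
        {
          return x > 0;
        }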

    Read the article

  • Wake laptop when lid is opened?

    - by crackout12
    I have a Samsung laptop which, going by my experience over the last few months, has been a great one. I am currently adding some functionality to it, and I noticed that I can wake the laptop from sleep just by opening the lid on Windows 7; in Ubuntu, however, I need to press the power button to wake it up. Using a program called i-nex, I noted that the kernel DOES detect a "lid switch", and I would like to use it for the wake-up function. Any ideas? UPDATE: thanks @yossile for bringing up some clues! However, the output of cat /proc/acpi/wakeup does not show the LID device. I still tried the second set of commands you gave me, with no effect. Then I tried experimenting, guessing that LID might be named something else, so I tried enabling the others. No victory so far. But I noticed that devices without any PCI listing stayed disabled no matter what I tried. Here is the output of cat /proc/acpi/wakeup:

        root@samsung:~# cat /proc/acpi/wakeup
        Device  S-state   Status     Sysfs node
        PCE4      S4    *disabled    pci:0000:00:04.0
        SBAZ      S4    *disabled    pci:0000:00:14.2
        P0PC      S4    *disabled    pci:0000:00:14.4
        GEC       S4    *disabled
        PE20      S4    *disabled    pci:0000:00:15.0
        PE21      S4    *disabled
        PE22      S4    *disabled    pci:0000:00:15.2
        PE23      S4    *disabled
        PWRB      S5    *enabled

    So maybe LID is actually GEC, PE21, or PE23? Still, there is a /proc/acpi/button/lid/LID/state file which shows that the lid is open. Any more ideas?

    Read the article
