Search Results

Search found 13404 results on 537 pages for 'george host'.

Page 376/537

  • Samba new file ownership, permissions configuration

    - by Martin Melka
    I have recently installed Samba on my server, and now I have a question about how to set up permissions. Currently I mount the Samba share on my laptop with this line in /etc/fstab:
      //<host>/share /mnt/melka-server-data/ cifs username=<usrname> password=<passwd> _netdev 0 0
    This works, as I can read the files and create them (as root). The problem is when I want to create files as a regular user: I always get a Permission Denied error. This is the ll output of the mount point:
      magicmaster@magicmaster-kubuntu:/mnt$ ll
      total 8
      drwxr-xr-x 3 root root 4096 lis 11 14:15 ./
      drwxr-xr-x 26 root root 4096 ríj 26 11:01 ../
      drwxrwxrwx 8 magicmaster magicmaster 0 lis 12 22:12 melka-server-data/
    and of the directory inside:
      magicmaster@magicmaster-kubuntu:/mnt/melka-server-data$ ll
      total 4
      drwxrwxrwx 8 magicmaster magicmaster 0 lis 12 22:12 ./
      drwxr-xr-x 3 root root 4096 lis 11 14:15 ../
      drwxrwxrwx 5 magicmaster magicmaster 0 lis 12 09:35 downloads/
      drwxrwxrwx 2 magicmaster magicmaster 0 ríj 28 12:57 lost+found/
      drwxrwxrwx 15 magicmaster magicmaster 0 lis 12 09:45 movies/
      drwxrwxrwx 2 magicmaster magicmaster 0 lis 1 21:15 newest/
      drwxrwxrwx 3 magicmaster magicmaster 0 lis 2 23:14 photos/
      drwxrwxrwx 2 magicmaster magicmaster 0 ríj 30 12:44 software/
      -rw-r--r-- 1 nobody nogroup 0 lis 12 22:12 zdar
    I called sudo chown -R magicmaster:magicmaster melka-server-data/ to try to make all the files belong to me. The file zdar was then created by magicmaster just by calling touch: I got the Permission Denied error, but the file was still created, though it belongs to nobody and I can't write into it. When I create a file as root, it still belongs to nobody, but at least I can write into it. What am I missing? I didn't notice anything related to this in the Samba config, and I don't like the idea of having to log on as root just to copy files. Thanks
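
    One common cause is that a CIFS share mounted by root is not mapped to the local desktop user, so everything appears owned by root or nobody regardless of what chown does on the server. A minimal sketch of mount options that usually fixes this, assuming the laptop user's uid/gid is 1000 (check with id magicmaster):

      # /etc/fstab - map every file on the CIFS mount to the local desktop user
      # uid/gid 1000 is an assumption; adjust to the output of `id magicmaster`
      //<host>/share /mnt/melka-server-data/ cifs username=<usrname>,password=<passwd>,uid=1000,gid=1000,file_mode=0664,dir_mode=0775,_netdev 0 0

      # apply the change without rebooting
      sudo umount /mnt/melka-server-data/
      sudo mount /mnt/melka-server-data/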

    Read the article

  • Confusion about git; how to undo?

    - by dan
    I wanted to install some source code that was on git. I don't really know what that means; I've never used git before, but I figured it was time to learn, so I first installed git. Next I tried to clone the git repository of the software I want to install. I got a message saying "the authenticity of ... can't be established". I went ahead and got another message warning that such and such would be added to known hosts. I went ahead again and it said something about the connection hanging up. After searching the internet for a while I realized I didn't need git to install the software, but now I have it installed and have added some host to some file or other. I'm concerned I've created a security issue I need to fix. Can anyone help me undo what I've done, or better understand what I've done? Did adding a git project open up my system? Beyond that, can anyone tell me how git works? Everything I've found assumes I know stuff that I don't yet. Thanks
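
    For what it's worth, the messages described above are SSH's normal first-connection behaviour: cloning over SSH added the remote server's public host key to ~/.ssh/known_hosts, which is not a security hole by itself. A hedged sketch of how to inspect and undo it (github.com is only an example hostname, not necessarily the one that was added):

      # see what was added - one line per remote host key
      cat ~/.ssh/known_hosts

      # remove the entry for a given host (replace github.com with the host you cloned from)
      ssh-keygen -R github.com

      # optionally remove git itself (Debian/Ubuntu package name assumed)
      sudo apt-get remove git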

    Read the article

  • Correct nvidia+intel graphics setup in 14.04

    - by Espressofa
    Just upgraded to 14.04 to try to fix some other issues. Now, something has gone wrong with my graphics. I have a Thinkpad T530 with Intel and Nvidia graphics cards. $ inxi -SGx System: Host: xyz Kernel: 3.13.0-24-generic x86_64 (64 bit, gcc: 4.8.2) Desktop: N/A Distro: Ubuntu 14.04 trusty Graphics: Card-1: Intel 3rd Gen Core processor Graphics Controller bus-ID: 00:02.0 Card-2: NVIDIA GF108M [NVS 5400M] bus-ID: 01:00.0 X.Org: 1.15.1 drivers: fbdev,vesa,intel,nouveau (unloaded: nvidia) Resolution: [email protected] GLX Renderer: N/A GLX Version: N/A Direct Rendering: N/A $ glxinfo name of display: :0 Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Error: couldn't find RGB GLX visual or fbconfig Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Error: couldn't find RGB GLX visual or fbconfig Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". Xlib: extension "GLX" missing on display ":0". I'm not sure what I did but now something is wrong with my graphics, as should be visible from the above commands. nvidia-detector says "none" as well. I used to have bumblebee but then some website said to remove it and now something's clearly wrong. What's the right way to set things up? Should I try to add bumblebee back? Here's what's installed now: $ dpkg --get-selections | grep nvidia nvidia-319 install nvidia-331 install nvidia-libopencl1-331 install nvidia-opencl-icd-331 install nvidia-prime install nvidia-settings install nvidia-settings-319 install
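
    "GLX missing" usually means X is running on a driver stack that does not match the installed libGL, which can happen when bumblebee is removed but pieces of it, or two different NVIDIA driver versions, are left behind. One possible way forward, sketched under the assumption that the leftover nvidia-319/bumblebee bits are the conflict; treat it as a starting point rather than a guaranteed fix:

      # remove the older driver and any bumblebee leftovers (harmless if already gone)
      sudo apt-get purge nvidia-319 bumblebee bumblebee-nvidia
      # reinstall the current driver plus PRIME switching support
      sudo apt-get install --reinstall nvidia-331 nvidia-prime
      # choose the NVIDIA GPU (or "intel" to stay on the integrated card), then reboot
      sudo prime-select nvidia
      # after rebooting, this should report a real renderer instead of the GLX error
      glxinfo | grep -i "opengl renderer"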

    Read the article

  • Basic AppFabric Service Bus Programming Lifecycle

    - by kaleidoscope
    The tasks required to create an application that accesses the AppFabric Service Bus are as follows:
    1. Create a service namespace. This service namespace contains the resources used by the AppFabric Service Bus to support the application.
    2. Define the AppFabric Service Bus contract. A contract specifies the signature of the service, the data it exchanges, and other required inputs, behavior specifications, and object invariants.
    3. Implement the contract. To implement a service contract, create a class that implements the interface and specify custom runtime behaviors.
    4. Configure the service by specifying endpoint and other behavior information.
    5. Build and run the service.
    6. Build and run the client application.
    As with any iterative, service-oriented software development, it may not always be appropriate to follow the preceding steps sequentially, or even to start from step 1. For example, if you want to build a client for a pre-existing service, you start at step 5. Or, if you are building a host service that others will use, you can skip step 6.
    Source: http://msdn.microsoft.com/en-us/library/ee173580.aspx
    Sarang, K

    Read the article

  • pppoe connection to dsl modem

    - by VJo
    Hello, I am connecting to the internet through a PPPoE connection, but for some reason I cannot connect to my modem (its address is 192.168.1.1). Before I set up the PPPoE connection, I could connect to it. So, is there a way? EDIT: The output of ifconfig is:
      r@PlaviZec:~$ ifconfig
      eth0 Link encap:Ethernet HWaddr 00:13:d4:f7:02:d4 inet6 addr: fe80::213:d4ff:fef7:2d4/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2811 errors:0 dropped:0 overruns:0 frame:0 TX packets:2801 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2538831 (2.5 MB) TX bytes:448591 (448.5 KB) Interrupt:21 Base address:0xa000
      lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:28 errors:0 dropped:0 overruns:0 frame:0 TX packets:28 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1600 (1.6 KB) TX bytes:1600 (1.6 KB)
      ppp0 Link encap:Point-to-Point Protocol inet addr:92.229.42.177 P-t-P:213.191.64.59 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1492 Metric:1 RX packets:2794 errors:0 dropped:0 overruns:0 frame:0 TX packets:2741 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:3 RX bytes:2476277 (2.4 MB) TX bytes:381240 (381.2 KB)
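
    A likely explanation, going by the output above, is that eth0 has no IPv4 address once ppp0 carries the connection, so there is no route to the modem's 192.168.1.0/24 subnet. A hedged sketch (assumes the modem hangs off eth0 and that 192.168.1.2 is unused on that subnet):

      # give eth0 an address on the modem's subnet (192.168.1.2 is only an example; pick any free address)
      sudo ip addr add 192.168.1.2/24 dev eth0

      # send traffic for the modem out eth0 instead of the PPPoE link
      sudo ip route add 192.168.1.1 dev eth0

      # the modem's web interface should now answer
      ping -c 3 192.168.1.1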

    Read the article

  • How do I back up my customers' data?

    - by marcamillion
    If you run a SaaS app, or work on one, I would love to hear from you. Where the safety and security of your customers' data is paramount, how do you secure it and back it up? I would love to know your main host (e.g. Heroku, Engine Yard, Rackspace, MediaTemple, etc.) and who you use for your backup. Be as detailed as possible - e.g. a quick overview of your service and the data you store (images, for instance), and what happens with the images when a user uploads them (e.g. they go to your Linode VPS and are posted to the site for them to see - then they are automatically sent to AWS or wherever, then once a week they are backed up to tape by the managed hosting provider, and you also back them up to your house/office). If you could also give some idea as to what the unit cost (per GB/per user/per month) of storage is - on average, I would really appreciate that. Getting ready to launch my app, and I would love to get some more perspective on the nitty-gritty details involved. Thanks!

    Read the article

  • Not able to connect to local network

    - by Roopesh
    I have installed Kubuntu. I am able to connect to the Internet and access external sites, but on my local network I have Bugzilla installed and I am not able to access it; I am not even able to ping the gateway, 192.168.1.1. Below is the result of the ifconfig command. Please help. Thanks
      ~# ifconfig
      eth0 Link encap:Ethernet HWaddr b8:70:f4:da:f9:a8 inet addr:192.168.1.100 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:42 Base address:0x2000
      lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:884 errors:0 dropped:0 overruns:0 frame:0 TX packets:884 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:81131 (81.1 KB) TX bytes:81131 (81.1 KB)
      wlan0 Link encap:Ethernet HWaddr 68:5d:43:2e:1c:79 inet addr:192.168.1.26 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::6a5d:43ff:fe2e:1c79/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2446 errors:0 dropped:0 overruns:0 frame:0 TX packets:2324 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1909441 (1.9 MB) TX bytes:393292 (393.2 KB)
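
    One thing that stands out in the output above is that both eth0 (192.168.1.100, no RUNNING flag, zero packets) and wlan0 (192.168.1.26) sit on the same 192.168.1.0/24 subnet, so local traffic may be routed out the dead wired interface. A hedged diagnostic sketch:

      # which interface is actually used to reach the gateway?
      ip route get 192.168.1.1

      # if it points at eth0 (which shows no link and no traffic), take that interface down and retest
      sudo ifconfig eth0 down
      ping -c 3 192.168.1.1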

    Read the article

  • CMS for coding blog

    - by OrgnlDave
    I've got a server with a LAMP stack and such. I'd like to host a blog-type site (or if there's a free place good for this, that would be cool!) that covers a variety of tutorials, interesting content, etc. There are tons of CMS's out there but if you search for tips on ones that do programming type things well, you get tons of hits about web development. I'd like to know if anyone here has recommendations from actually using a CMS for this type of thing or, short of that, can recommend one - not based on generalities like "Joomla! is great!" I'm looking for the least setup time possible. I'm proficient with CSS and I can design a color scheme, so that's not a big problem. As you can expect, attaching files, pictures, and syntax highlighting are musts (C/C++ ish is good). Ability to group posts, perhaps use tags, etc. would be cool too, but not necessary. As I'm writing this, it almost sounds like it'd be easier to custom-code a small PHP site myself.

    Read the article

  • Transmission shutdown script for multiple torrents?

    - by Khurshid Alam
    I have written a shutdown script for Transmission. Transmission calls the script after a torrent download finishes, and the script runs perfectly on my machine (Ubuntu 11.04 & 12.04):
      #!/bin/bash
      sleep 300s
      # default display on current host
      DISPLAY=:0.0
      # find out if monitor is on. Default timeout can be configured from screensaver/Power configuration.
      STATUS=`xset -display $DISPLAY -q | grep 'Monitor'`
      echo $STATUS
      if [ "$STATUS" == " Monitor is On" ]
      ### Then check if it's still downloading a torrent. Couldn't figure out how. (Maybe by monitoring network downstream activity?)
      then
          notify-send "Downloads Complete" "Exiting transmission now"
          pkill transmission
      else
          notify-send "Downloads Complete" "Shutting Down Computer"
          dbus-send --session --type=method_call --print-reply --dest=org.gnome.SessionManager /org/gnome/SessionManager org.gnome.SessionManager.RequestShutdown
      fi
      exit 0
    The problem is that when I'm downloading more than one file, Transmission executes the script as soon as the first one finishes. I would like it to run only after all downloads have completed, so I want to put a second check (right after the monitor check) for whether another torrent is still downloading. Is there any way to do this?
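
    One way to add that second check is to ask Transmission itself whether anything is still downloading, for example via transmission-remote. A hedged sketch (assumes transmission-remote is installed and RPC is enabled; add -n user:pass if RPC authentication is on, and note the column layout of -l output can vary between versions):

      # count torrents that are not yet at 100% (skip the header line and the trailing "Sum:" line)
      STILL_DOWNLOADING=$(transmission-remote -l | awk 'NR>1 && $1!="Sum:" && $2!="100%"' | wc -l)

      if [ "$STILL_DOWNLOADING" -gt 0 ]; then
          # another torrent is still running: do nothing this time around
          exit 0
      fi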

    Read the article

  • Why doesn't Firefox cache my images and CSS

    - by Richard A
    I am using IIS7 and I have already set up the following, but when I run Firefox it seems not to cache any of my images, even with "remember history" set.
      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.webServer>
          <staticContent>
            <clientCache cacheControlCustom="public" cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
          </staticContent>
        </system.webServer>
      </configuration>
    However, when I use Firebug it still points to Firefox not caching images and CSS. Response headers:
      Cache-Control public,max-age=604800
      Content-Type text/css
      Content-Encoding gzip
      Last-Modified Mon, 27 Jun 2011 03:53:22 GMT
      Accept-Ranges bytes
      Etag "507968c27d34cc1:0"
      Vary Accept-Encoding
      Server Microsoft-IIS/7.5
      X-Powered-By ASP.NET
      Date Mon, 27 Jun 2011 13:06:41 GMT
      Content-Length 5067
    Request headers:
      Host www.xx.com
      User-Agent Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
      Accept text/css,*/*;q=0.1
      Accept-Language en-us,en;q=0.5
      Accept-Encoding gzip, deflate
      Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Keep-Alive 115
      Connection keep-alive
      Referer http://www.xx.com/
      Cookie __utma=62996397.135679654.1309106351.1309159743.1309164158.8; __utmz=62996397.1309106351.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=62996397
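
    A quick way to check the caching behaviour outside the browser is to replay the conditional request Firefox should be sending; if the server answers 304 Not Modified, the headers are fine and the problem is on the client side (for example, Firebug's "Disable cache" option or forced reloads). A hedged sketch using the ETag shown above (the stylesheet path is a placeholder, as www.xx.com is in the question):

      # first request: expect 200 plus the Cache-Control/ETag headers shown in the question
      curl -sI http://www.xx.com/style.css

      # conditional request: a correctly configured server should answer "304 Not Modified"
      curl -sI -H 'If-None-Match: "507968c27d34cc1:0"' http://www.xx.com/style.css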

    Read the article

  • What are the requirements to test a website using jquery.get() ? [migrated]

    - by Frankie
    I am working on a simple website. It has to search quite a few text files in different sub-folders. The rest of the page uses jQuery, so I would like to use it for this also. The function I am looking at is .get() for downloading the files. So my main question is, can I test this on my local computer (Ubuntu Linux) or do I have to have it uploaded to a server? Also, if there's a better way to go about this, that would be nice to know. However, I'm more worried about getting it working. Thanks, Frankie
    PS: Here's the JS/jQuery code for downloading the files to an array.
      g_lists = new Array();

      $(":checkbox").each(function(i){
          if ($(this).attr("name") != "0") {
              var path = "../" + $(this).attr("name") + ".txt";
              $("#bot").append("<br />" + path); // debug
              $.get(path, function(data){
                  g_lists[i] = data;
                  $("#bot").html(data);
              });
          } else {
              g_lists[i] = "";
          }
      });
    Edit: Just a note about the path variable. I think it's correct, but I'm not 100% sure. I'm new to web development. Here are some examples it produces and the directory tree of the site. Maybe it will help, can't hurt.
      .
      |-- include
      |   |-- jquery.js
      |   `-- load.js
      |-- index.xhtml
      |-- style.css
      `-- txt
          `-- Scripting_Tools
              |-- Editors.txt
              `-- Other.txt
    Examples of path: ../txt/Scripting_Tools/Editors.txt and ../txt/Scripting_Tools/Other.txt
    Well, I'm a new user, so I can't "answer" my own question, so I'll just post it here: after asking for help on an IRC channel specific to jQuery, I was told I could use this on a local host. To do this I installed the Apache web server and copied my site into its directory. More information on setting it up can be found here: http://www.howtoforge.com/ubuntu_debian_lamp_server Then, to run the site, I navigated my browser to "localhost" and everything works.
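
    The short answer to the main question is that $.get() needs the page to be served over HTTP (browsers block XMLHttpRequest against file:// URLs), but any local web server will do; it does not have to be a remote machine. A hedged sketch of the lightest-weight option, run from the site's root directory (the Apache/LAMP setup from the self-answer above works just as well):

      # serve the current directory at http://localhost:8000/
      cd /path/to/site                    # hypothetical path to the project root
      python -m SimpleHTTPServer 8000     # Python 2; with Python 3 use: python3 -m http.server 8000

      # then browse to http://localhost:8000/index.xhtml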

    Read the article

  • Cannot log in to Dashboard / Unable to find the server at mykeystoneurl

    - by neo0
    I installed the Dashboard following this guide: http://wiki.openstack.org/OpenStackDashboard Everything went fine, but when I run the server I cannot log in with the username and password from the DATABASE config in local_settings.py. Here's my config:
      DATABASES = {
          'default': {
              'ENGINE': 'django.db.backends.mysql',
              'NAME': 'dashboarddb',
              'USER': 'nova',
              'PASSWORD': 'nova',
              'HOST': 'localhost',
              'default-character-set': 'utf8'
          },
      }
    When I run the Dashboard server and enter the username and password, the browser returns this error:
      Unable to find the server at mykeystoneurl (HTTP 400)
    and the command line shows:
      DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
      DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
      Validating models...
      0 errors found
      Django version 1.3.1, using settings 'openstack_dashboard.settings'
      Development server is running at http://0.0.0.0:8888/
      Quit the server with CONTROL-C.
      Request returned failure status.
      Traceback (most recent call last):
        File "/home/us/horizon/.venv/src/python-keystoneclient/keystoneclient/client.py", line 121, in request
          body = json.loads(body)
        File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
          return _default_decoder.decode(s)
        File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
          obj, end = self.raw_decode(s, idx=_w(s, 0).end())
        File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
          raise ValueError("No JSON object could be decoded")
      ValueError: No JSON object could be decoded
      [06/Mar/2012 15:20:03] "POST /auth/login/ HTTP/1.1" 200 3735
    I also tried logging in as "admin" with the password "password" or "secrete", but it didn't work. What's wrong? Thank you!
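
    The "mykeystoneurl" text in the error suggests the dashboard is still pointing at a placeholder Keystone endpoint rather than a real one; the DATABASES block only configures Django's own store, not the credentials used for login. A hedged check from the shell (assuming Keystone runs locally on the default port 5000):

      # does Keystone answer at all? a healthy endpoint returns a JSON version document, not an HTML error
      curl -si http://127.0.0.1:5000/v2.0/

      # then make sure the keystone URL setting in local_settings.py points at that endpoint
      # (OPENSTACK_KEYSTONE_URL is the usual setting name for Horizon of that era; verify for your version)
      grep -n -i keystone local_settings.py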

    Read the article

  • Using PDO with MVC

    - by mister martin
    I asked this question at stackoverflow and received no response (closed as duplicate with no answer). I'm experimenting with OOP and I have the following basic MVC layout:
      class Model {
          // do database stuff
      }

      class View {
          public function load($filename, $data = array()) {
              if(!empty($data)) {
                  extract($data);
              }
              require_once('views/header.php');
              require_once("views/$filename");
              require_once('views/footer.php');
          }
      }

      class Controller {
          public $model;
          public $view;

          function __construct() {
              $this->model = new Model();
              $this->view = new View();
              // determine what page we're on
              $page = isset($_GET['view']) ? $_GET['view'] : 'home';
              $this->display($page);
          }

          public function display($page) {
              switch($page) {
                  case 'home':
                      $this->view->load('home.php');
                      break;
              }
          }
      }
    These classes are brought together in my setup file:
      // start session
      session_start();
      require_once('Model.php');
      require_once('View.php');
      require_once('Controller.php');
      new Controller();
    Now where do I place my database connection code and how do I pass the connection onto the model?
      try {
          $db = new PDO('mysql:host='.DB_HOST.';dbname='.DB_DATABASE.'', DB_USERNAME, DB_PASSWORD);
          $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
      } catch(PDOException $err) {
          die($err->getMessage());
      }
    I've read about Dependency Injection, factories and miscellaneous other design patterns talking about keeping SQL out of the model, but it's all over my head using abstract examples. Can someone please just show me a straight-forward practical example?

    Read the article

  • Hosting several HTTP servers on single domain name

    - by Nakilon
    Several people share a single domain name, server.company.com, where they are now supposed to host their infrastructure or temporary projects, written in different ways and even in different programming languages. How do they divide the domain?
    1. Split into subdomains: john.server.company.com, kate.server.company.com, etc. This would need a lot of admin assistance, time, etc. - there would be no way for John and Kate to do it themselves.
    2. Split into URL namespaces: server.company.com/john/, server.company.com/kate/, etc. Pro: they can now make a single welcome page at the root with any additional info (if they need it?). Con: each server would need to know its namespace string constant, and hrefs like / would need patching.
    3. Split into ports: server.company.com:8080, server.company.com:8081, etc., and make a single :80 welcome page. Pro: they can still make a single welcome page at :80. Con: ???
    I would like to know more pros and cons for solutions 2 and 3.

    Read the article

  • virtual install from ISO not getting virtual kernel

    - by Pete
    I have a KVM host (12.04.5) that I have been installing guests on in a variety of ways. I recently noticed one of my guests was running a generic kernel, when I'm fairly certain I specified "Minimal virtual machine" during install from a 12.04.2 server ISO. From what I understand it should be running a stripped-down kernel "optimized" for VMs. I set up another server to test, this time using 14.04.1, and sure enough uname -r returned 3.13.0-32-generic. It seems that if I use an .iso to install, I end up with the generic kernel regardless. However, building with the vmbuilder ... --flavour virtual --suite precise ... (I don't have trusty available yet) script gives me an Ubuntu 12.04.5 LTS system running kernel 3.2.0-67-virtual. The server FAQ mentions I should be getting the virtual kernel. "What are practical advantages of using linux-image-virtual kernel?" gives me the impression that it doesn't really matter functionally (in my case I only have a couple of VMs running). My first thought was that maybe I was somehow not applying the correct options, because the installer's F4 menu doesn't really give great feedback on whether the mode has been selected or not. Looking in the log /var/log/installers/syslog I see Command line: file=/cdrom/preceed/ubuntu-server-minimalvm.seed ... I know that I can install the virtual kernel package down the road, but why am I not (or should I be) getting the virtual flavor of kernel from an ISO install when doing a minimum VM install?
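
    Whatever the installer ends up choosing, switching an existing guest to the virtual flavour afterwards is only a couple of commands. A hedged sketch for the 12.04/14.04-era package names (note that from 12.10 onwards the separate -virtual build was folded into -generic, so on 14.04 uname -r keeps the -generic name even with the virtual metapackage installed):

      # inside the guest: install the virtual kernel metapackage and boot into it
      sudo apt-get update
      sudo apt-get install linux-image-virtual
      sudo reboot

      # confirm which kernel is running (12.04 reports ...-virtual)
      uname -r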

    Read the article

  • My site has crashed... does anyone have some info?

    - by marwan
    Hi all, I booked a domain name for my website from a hosting provider. I gave the domain name, along with FTP details, to a freelancer to develop the site in WordPress. The freelancer developed it and got full payment, and the site was working fine, etc. Since then I have not changed the admin login or the FTP details, which means that information is still known to the freelancer. A week ago I found that some links on my site were not working. I sent him a mail about this, and he said that he would fix it if I gave him the FTP details, and I did so. Next I found that the entire site was gone. Then he sent me a mail, without my asking, saying that someone had gained access to my server, removed all the files of my site and installed Drupal instead, and that he could rebuild the site in one day for a full fee of 250 USD again. Does anyone know what I can do in this situation, to find out who did this - could it be the hosting provider or that freelancer - and whether there is a possibility of getting my site back onto the server? I will appreciate any info on this. Regards, Thanks

    Read the article

  • how to create a mirror of minimum size to install Ubuntu

    - by Registered User
    I need to create an HTTP URL on my laptop so that a Ubuntu installation can begin within my laptop in a Xen environment. This is how the final thing will look: http://bderzhavets.wordpress.com/2008/10/28/install-ubuntu-intrepid-server-pv-domu-at-xen-33-port-via-httpgetco-centos-52-dom0/ The host and client are both going to be my laptop. I Googled and came across apt-mirror and some other packages. I do not want to archive the entire 15 GB Ubuntu repository on my machine. It is not possible to use a CD, ISO, or loop-mounted disk (reason mentioned below). I have tried using the netboot image on the local machine, which failed, because if you are attempting to create a virtual machine on hardware which does not support VT, the virt-manager installer necessarily needs a URL of this sort: http://archive.ubuntu.com/ubuntu/dists/hardy/main/installer-i386/current/images/netboot/ - any other option to create the guest OS is simply grayed out. The unfortunate part is that my Ethernet connections do not work when I boot with Xen-4.0 and a pv-ops Dom0 kernel from Jeremy's tree, which is where I have to do this work. So I have to create a URL structure which is similar to the Ubuntu mirrors. How can I do this with a bare minimum so that at least the console boots, and once the console comes up I can do some work?
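
    For a paravirtual install, virt-manager only needs the netboot installer tree under dists/<release>/main/installer-<arch>/ to boot the console, which is tens of megabytes rather than the 15 GB archive. A hedged sketch (the release, architecture and web root below are assumptions; adjust to your case):

      # mirror just the netboot installer tree, preserving the archive's directory layout
      wget --mirror --no-host-directories --directory-prefix=/var/www/ubuntu \
           http://archive.ubuntu.com/ubuntu/dists/hardy/main/installer-i386/current/images/netboot/

      # serve it over HTTP; any web server works, e.g. a throwaway Python one on port 80
      cd /var/www/ubuntu && sudo python -m SimpleHTTPServer 80

      # then give virt-manager http://<laptop-ip>/ubuntu/ as the install URL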

    Read the article

  • Wireless will not connect on an Asus U56? [closed]

    - by ernie
    I have an ASUS U56. I could connect to the internet without delay or problems in Ubuntu 11.04. Now, running Ubuntu 11.10, I have not been able to connect. I can see the network and the computer tries to connect, but it never makes the connection. I have been trying to fix this problem since the release date of 11.10. I had to install the older 2.6 kernel to get the wifi to work. Any other solutions?
      ernest@ernest-U56E:~$ ifconfig
      eth0 Link encap:Ethernet HWaddr 14:da:e9:25:6c:d0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:53
      lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
      wlan0 Link encap:Ethernet HWaddr 40:25:c2:3f:46:d4 inet addr:192.168.xx.xxx Bcast:192.168.10.255 Mask:255.255.255.0 inet6 addr: fe80::4225:c2ff:fe3f:46d4/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:44962 errors:0 dropped:0 overruns:0 frame:0 TX packets:32287 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:35614976 (35.6 MB) TX bytes:5400816 (5.4 MB)
      ernest@ernest-U56E:~$
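
    When a wireless card works on one kernel series (2.6 here) but not a newer one, the usual first checks are whether the radio is soft- or hard-blocked and whether the newer kernel loads the right driver and firmware. A hedged diagnostic sketch:

      # is the radio blocked in software or by the hardware switch/Fn key?
      rfkill list

      # which driver claims the card, and did it load cleanly under the new kernel?
      lspci -nnk | grep -A3 -i network
      dmesg | grep -iE 'wlan|firmware|80211'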

    Read the article

  • How to write PowerShell code part 3 (calling external script)

    - by ybbest
    In this post, I'd like to show you how to call an external script from a PowerShell script. I'll use the site creation script as an example. You can download the script here.
    1. To call the external script, you first need to grab the current script's path. You can do so by calling $scriptPath = Split-Path $myInvocation.MyCommand.Path, and then use that to build the path to your external script:
      $scriptPath = Split-Path $myInvocation.MyCommand.Path
      $ExternalScript=$scriptPath+"\CreateSiteCollection.ps1"
      $configurationXmlPath=$scriptPath+"\SiteCollection.xml"
      [xml] $configurationXml=Get-Content $configurationXmlPath
      & "$ExternalScript" $configurationXml
      Write-Host
    2. If you would like to pass in any parameters, you need to define your script parameters in param () at the top of the script, separating each parameter with a comma (,); when calling the script, you do not need commas (,) to separate the parameters.
      #Pass in the Parameters.
      param ([xml] $xmlinput)

    Read the article

  • How to write PowerShell code part 2 (Using function)

    - by ybbest
    In the last post, I showed you how to use an external configuration file in your PowerShell script. In this post, I will show you how to create a PowerShell function and call an external PowerShell script. You can download the script here.
    1. In the original script, I create the site directly using the New-SPSite command. I will refactor it so that a new function creates the site using New-SPSite. A PowerShell function is quite similar to a C# method: you put your function parameters in (), separated by commas (,), and you put your method body in {}.
      function add ([int] $num1 , [int] $num2){
          $total=$num1+$num2
          #Return $total
          $total
      }
    2. The difference is that you do not need a semi-colon (;) at the end of each statement, and when calling the function you do not use commas (,) to separate the parameters.
      function add ([int] $num1 , [int] $num2){
          $total=$num1+$num2
          #Return $total
          $total
      }
      #Calling the function
      [int] $num1=3
      [int] $num2=4
      $d= add $num1 $num2
      Write-Host $d
    3. If you would like to return anything from the function, you just type the object you want to return; there is no need to type return, e.g. $ObjectToReturn, not return $ObjectToReturn.

    Read the article

  • Configuring Team Foundation Server Basic on Home Server.

    - by Enrique Lima
    For the installation I selected only the Team Foundation Server role. Then I opened the Team Foundation Server Administration Console (which I think is a great addition and improvement over the way TFS was configured in the past) to proceed with the configuration of the pieces. Once I selected Configure Installed Features, the Configuration Center opened up. Now, the choices … In my implementation here I just want to take advantage of Source Control, primarily. I want to be able to store my code and projects. So, Basic it is! So, the Basic Configuration Wizard opens up. Now the options to configure are very limited, but we have to provide details for the SQL Server instance. And now, to select Install SQL Server Express. If you want to take advantage of another system in your environment to host your database, well, you could Use an existing SQL Server Instance. Once it has the details it needs, you get a Summary view to confirm your choices. Once you click Next or Verify, it runs readiness checks on your system to make sure the installation will have a successful pass. And we love GREEN! Now, since we got the green flag, our next stop is to let the wizard do its magic; click on Configure. And once again, we love GREEN! We click Next, and … We like a big green Success sign … We close the Configuration Center … First results … Web Access … Nothing to show … but we are there! And all this running from a Microsoft Home Server installation.

    Read the article

  • Best peer-to-peer game architecture

    - by Dejw
    Consider a setup where game clients:
    1. have quite small computing resources (mobile devices, smartphones)
    2. are all connected to a common router (LAN, hotspot, etc.)
    The users want to play a multiplayer game, without an external server. One solution is to host an authoritative server on one phone, which in this case would also be a client. Considering point 1, this solution is not acceptable, since the phone's computing resources are not sufficient. So, I want to design a peer-to-peer architecture that will distribute the game's simulation load among the clients. Because of point 2, the system needn't be complex with regard to optimization; the latency will be very low. Each client can be an authoritative source of data about himself and his immediate environment (for example, bullets). What would be the best approach to designing such an architecture? Are there any known examples of such a LAN-level peer-to-peer protocol? Notes: Some of the problems are addressed here, but the concepts listed there are too high-level for me. Security: I know that not having one authoritative server is a security issue, but it is not relevant in this case as I'm willing to trust the clients. Edit: I forgot to mention that it will be a rather fast-paced game (a shooter). Also, I have already read about networking architectures at Gaffer on Games.

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache Control, mostly based on the default configuration of the server followed up with recommendations from the Page Speed & Y-Slow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if Y-Slow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. Any of the official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache Control strategy? EDIT: A link through Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:
    - If the file is compressed using GZIP, etc., use "cache-control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible.
    - Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators, while ETag is based on file contents instead of modification time alone; using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason.
    - If you are using Apache on more than one server to host the same content, then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size") unless you are genuinely using the same live filesystem.
    - Use "cache-control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
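
    Whichever policy you settle on, it is worth checking what the server actually sends rather than trusting the configuration files. A hedged sketch (example.com and the asset path stand in for your own site):

      # inspect the caching-related response headers for a static asset
      curl -sI http://example.com/static/app.css | grep -iE '^(cache-control|etag|last-modified|expires|vary):'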

    Read the article

  • Sangam 13: Hyderabad, India

    - by mvaughan
    by Teena Singh, Oracle Applications User Experience The AIOUG (All India Oracle User Group) will be hosting Sangam 13 November 8th and 9th in Hyderabad, India. The first Sangam conference was in 2009 and the AppsUX team has been involved with the conference and user group membership since 2011. We are excited to be returning to the conference and meeting Oracle end users there. For the first time at Sangam the AppsUX team will host an Onsite Usability Lab at the conference. If you or one of your team members is attending the conference and interested in attending a pre-scheduled one on one usability session, contact [email protected]. In addition to pre-scheduled sessions in the Onsite Usability Lab, our team will also be hosting Walk In studies.  Whether you have 5 minutes, 15 minutes, or half an hour, you can experience a one on one demo learn more about how user testing is conducted with a UX expert. Additionally, you can learn how you and your company can participate in future design and user research activities. The AppsUX team will also be available at the Oracle booth in the Demo area if you want to ask questions. Finally, you can learn how simplicity, consistency, and emerging trends are driving the applications user experience strategy at Oracle when you attend Thomas Wolfmaier's (Director of SCM User Experience, Oracle) presentation on: Applications User Experiences In the Cloud: Trends and Strategy,  November 8th, 2013. For further information on our team’s involvement in the conference, please refer to the events page on Usable Apps here.

    Read the article
