Search Results

Search found 17710 results on 709 pages for 'portable home directories'.


  • htaccess with wildcard SSL

    - by Ericko
    We have a wildcard SSL certificate that is supposed to work on any subdomain of a given domain. So on this server we have this file structure:

        /home/DOMAIN/public_html/subdomainx
        /home/DOMAIN/public_html/subdomainy
        etc...

    Now, the certificate is installed, but when you visit any subdomain over https (example: hxxps://subdomainx.domain.com) it points to /home/DOMAIN/public_html/index.php. We need that when you visit a subdomain via https (hxxps://subdomainx.domain.com) it points to the same directory as its http equivalent: /home/DOMAIN/public_html/subdomainx. Our provider tells us that this is not possible, that the current behaviour is correct, and that to achieve this we need to do it with htaccess. I've tried a few things, including this solution, which seems to be what I need: http://stackoverflow.com/questions/5365612/advice-on-configuring-htaccess-file-to-redirect-http-subdomain-to-https-equival But I can't get it to work. Any tips? Thanks. Added: The server is Apache.
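
    A minimal .htaccess sketch along those lines (assuming mod_rewrite is available and that the subdomain names match the directory names exactly; "subdomainx" and "subdomainy" are placeholders):

        RewriteEngine On
        # Skip requests that already point into a subdomain directory
        RewriteCond %{REQUEST_URI} !^/(subdomainx|subdomainy)/
        # Capture the subdomain from the Host header...
        RewriteCond %{HTTP_HOST} ^(subdomainx|subdomainy)\.domain\.com$ [NC]
        # ...and rewrite into the matching directory
        RewriteRule ^(.*)$ /%1/$1 [L]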


  • Linux SFTP and many local user accounts, limits with mount --bind?

    - by user123428
    I am in the process of building a solution to let many developers (possibly hundreds) work on their files via SFTP, each one jailed in their home directory. For our particular needs, we have a Samba mount point that contains all of the users' home directories. I have started developing the following solution and hit some walls:

    - I have configured an Ubuntu Lucid server as the SFTP server.
    - In order to jail each user in their home directory (without allowing them to browse a directory up and see all the other users' folders) I am using mount --bind and not a symbolic link (also, some FTP clients don't really work with symlinks).
    - The user accounts are local Unix accounts on the SFTP server (not using a directory service or anything) that have an empty home folder created on the local machine; I then use mount --bind to bind the empty folder to the actual user's home directory on the Samba share.

    With this solution I am hitting a couple of problems: in the case of a server reboot, all the bind mounts are lost because they are not written in fstab. I have also read somewhere that the maximum number of entries in fstab is 400 (which does not really help us). I have thought of writing something that stores the mounts in a text file as a backup and, on server reboot, runs a script that remounts them. I am just really unsure about this whole process and was wondering if anyone has any insight on a better solution for SFTP? (not FTP)
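
    For persistence across reboots, bind mounts can live in /etc/fstab like any other mount; a sketch with hypothetical paths:

        # /etc/fstab -- bind each user's Samba-backed home into the local jail
        /mnt/samba/homes/alice  /home/alice  none  bind  0  0
        /mnt/samba/homes/bob    /home/bob    none  bind  0  0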


  • Rsync with a list of variables

    - by EMKA
    I am trying to write a bash script that will rsync only a specific subset of folders. I am trying to figure out a slicker way, so that I can just add a variable such as FOLDER1='name of folder in home directory' and then run rsync -arvz --delete /home/emka/$FOLDER1/ /home/emka/Desktop/Mount/$FOLDER1. Currently I have FOLDER1 through FOLDER13, but I do not want to repeat the above line thirteen times. Could someone give me a push on how to do this?
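
    A bash array plus a loop keeps it to one rsync invocation; a sketch (folder names are placeholders, and the quotes matter if names contain spaces):

        #!/bin/bash
        FOLDERS=("Documents" "Pictures" "Music")   # ...up to thirteen entries

        for f in "${FOLDERS[@]}"; do
            rsync -arvz --delete "/home/emka/$f/" "/home/emka/Desktop/Mount/$f"
        done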


  • Multiple Skype Accounts in Windows 8

    - by Liath
    We use Skype at work and I use Skype at home; I have two accounts so that clients can't call me at home and I'm not distracted when I'm at work. I've just upgraded my laptop from Windows 7 to Windows 8. In Windows 7 I'd sign in to one account, then sign out. With Windows 8 you have to tie Skype to your Windows account. How can I maintain this distinction and still use my laptop for Skype calls when I work at home?


  • Why do Remote Desktop Sessions show the client's internal IP address? [closed]

    - by Varp
    I have Windows Server 2008 R2 with a static IP address on the WAN interface. I connect to the server from home from my laptop. The laptop at home is behind a NAT box. When I connect to the server from home, the client status dialog in Remote Desktop Session Manager shows the local IP address of the client behind the NAT box, not the WAN IP address of the NAT box. I would have supposed I'd see the WAN IP address of the NAT box in Remote Desktop Session Manager. Shouldn't I?


  • Building a Student Storage server

    - by DobotJr
    I work for a school district. I've been put in charge of building a storage server for students: a place for them to work from at school and at home. My challenge is getting this to work from home. At school they log in, authenticate, and get a mapped drive to their folder on the server (S:\fileserver\studentname). My question is: how can I make this available to students at home? The server is running Windows Server 2003 R1. I've got PHP, Apache, and MySQL working together. My idea is to write a script that will "crawl" through the directory containing all of the student folders, then create an entry for every file and folder in a MySQL DB. Then create a login page that uses LDAP for authentication, so that once they log in to the server from home, they get a page with the folders and files tied to their username. Has anyone out there ever put something like this together?
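
    A sketch of the LDAP piece in PHP (the server name, domain suffix and share root are hypothetical; binding with userPrincipalName against AD is the usual shortcut):

        <?php
        // Connect to a domain controller and bind as the student
        $ldap = ldap_connect("ldap://dc1.district.local");
        ldap_set_option($ldap, LDAP_OPT_PROTOCOL_VERSION, 3);
        ldap_set_option($ldap, LDAP_OPT_REFERRALS, 0);

        $user = basename($_POST['username']);   // basename() blunts path tricks
        if (@ldap_bind($ldap, $user . '@district.local', $_POST['password'])) {
            // Authenticated: only list this student's folder
            $studentRoot = 'D:\\students\\' . $user;
        } else {
            die('Login failed');
        }
        ?>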


  • Why doesn't my cron.d per minute job run?

    - by Travis Griggs
    I have thrown a bunch of darts trying to get a Python script of mine to execute every minute. So I thought I'd simplify it to just do the "simplest thing that could possibly work" once per minute (I'm running Debian/testing). I created a single-line file in /etc/cron.d/perminute: * * * * * /bin/touch /home/me/ding_dong It's owned by root, and executable (not sure if either of those matters). And then I did: sudo service cron reload And then I sit back and run ls -ltr again and again in my home directory (/home/me). But my ding_dong file never shows up. I know that if I do a sudo /bin/touch /home/me/ding_dong, it shows up right away. Obviously I'm missing something stupid here.
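
    Worth checking, and going by the snippet it is the likely culprit: files under /etc/cron.d use the system crontab format, which takes a username field between the schedule and the command. A sketch of the fixed file:

        # /etc/cron.d/perminute -- note the user field ("root"),
        # which per-user crontabs don't have
        * * * * * root /bin/touch /home/me/ding_dong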


  • I am trying to deploy my first rails app using Capistrano and am getting an error.

    - by Andrew Bucknell
    My deployment of a Rails app with Capistrano is failing and I'm hoping someone can provide me with pointers to troubleshoot. The following is the command output:

        andrew@melb-web:~/projects/rails/guestbook2$ cap deploy:setup
          * executing `deploy:setup'
          * executing "mkdir -p /var/www/dev/guestbook2 /var/www/dev/guestbook2/releases /var/www/dev/guestbook2/shared /var/www/dev/guestbook2/shared/system /var/www/dev/guestbook2/shared/log /var/www/dev/guestbook2/shared/pids && chmod g+w /var/www/dev/guestbook2 /var/www/dev/guestbook2/releases /var/www/dev/guestbook2/shared /var/www/dev/guestbook2/shared/system /var/www/dev/guestbook2/shared/log /var/www/dev/guestbook2/shared/pids"
            servers: ["dev.andrewbucknell.com"]
        Enter passphrase for /home/andrew/.ssh/id_dsa:
        Enter passphrase for /home/andrew/.ssh/id_dsa:
            [dev.andrewbucknell.com] executing command
            command finished

        andrew@melb-web:~/projects/rails/guestbook2$ cap deploy:check
          * executing `deploy:check'
          * executing "test -d /var/www/dev/guestbook2/releases"
            servers: ["dev.andrewbucknell.com"]
        Enter passphrase for /home/andrew/.ssh/id_dsa:
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "test -w /var/www/dev/guestbook2"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "test -w /var/www/dev/guestbook2/releases"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "which git"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "test -w /var/www/dev/guestbook2/shared"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
        You appear to have all necessary dependencies installed

        andrew@melb-web:~/projects/rails/guestbook2$ cap deploy:migrations
          * executing `deploy:migrations'
          * executing `deploy:update_code'
            updating the cached checkout on all servers
            executing locally: "git ls-remote [email protected]:/home/andrew/git/guestbook2.git master"
        Enter passphrase for key '/home/andrew/.ssh/id_dsa':
          * executing "if [ -d /var/www/dev/guestbook2/shared/cached-copy ]; then cd /var/www/dev/guestbook2/shared/cached-copy && git fetch origin && git reset --hard 369c5e04aaf83ad77efbfba0141001ac90915029 && git clean -d -x -f; else git clone [email protected]:/home/andrew/git/guestbook2.git /var/www/dev/guestbook2/shared/cached-copy && cd /var/www/dev/guestbook2/shared/cached-copy && git checkout -b deploy 369c5e04aaf83ad77efbfba0141001ac90915029; fi"
            servers: ["dev.andrewbucknell.com"]
        Enter passphrase for /home/andrew/.ssh/id_dsa:
            [dev.andrewbucknell.com] executing command
         ** [dev.andrewbucknell.com :: err] Permission denied, please try again.
         ** Permission denied, please try again.
         ** Permission denied (publickey,password).
         ** [dev.andrewbucknell.com :: err] fatal: The remote end hung up unexpectedly
         ** [dev.andrewbucknell.com :: out] Initialized empty Git repository in /var/www/dev/guestbook2/shared/cached-copy/.git/
            command finished
        failed: "sh -c 'if [ -d /var/www/dev/guestbook2/shared/cached-copy ]; then cd /var/www/dev/guestbook2/shared/cached-copy && git fetch origin && git reset --hard 369c5e04aaf83ad77efbfba0141001ac90915029 && git clean -d -x -f; else git clone [email protected]:/home/andrew/git/guestbook2.git /var/www/dev/guestbook2/shared/cached-copy && cd /var/www/dev/guestbook2/shared/cached-copy && git checkout -b deploy 369c5e04aaf83ad77efbfba0141001ac90915029; fi'" on dev.andrewbucknell.com
        andrew@melb-web:~/projects/rails/guestbook2$

    The following fragment is from cap -d deploy:migrations:

        Preparing to execute command: "find /var/www/dev/guestbook2/releases/20100305124415/public/images /var/www/dev/guestbook2/releases/20100305124415/public/stylesheets /var/www/dev/guestbook2/releases/20100305124415/public/javascripts -exec touch -t 201003051244.22 {} ';'; true"
        Execute ([Yes], No, Abort) ? |y| yes
          * executing `deploy:migrate'
          * executing "ls -x /var/www/dev/guestbook2/releases"
        Preparing to execute command: "ls -x /var/www/dev/guestbook2/releases"
        Execute ([Yes], No, Abort) ? |y| yes
        /usr/lib/ruby/gems/1.8/gems/capistrano-2.5.17/lib/capistrano/recipes/deploy.rb:55:in `join': can't convert nil into String (TypeError)
            from /usr/lib/ruby/gems/1.8/gems/capistrano-2.5.17/lib/capistrano/recipes/deploy.rb:55:in `load'
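
    The three "Permission denied" lines come from the deploy host itself trying to clone over SSH. One common way through (a sketch, assuming Capistrano 2.x and that the deploy host should reuse your local key): load the key into a local agent with `eval "$(ssh-agent)" && ssh-add ~/.ssh/id_dsa`, then forward the agent in config/deploy.rb:

        # config/deploy.rb -- reuse the local ssh-agent on the deploy host
        # (assumes the agent is running and already holds the unlocked key)
        ssh_options[:forward_agent] = true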


  • The backbone router isn't working properly

    - by user2473588
    I'm building a simple Backbone app that has 4 routes: home, about, privacy and terms. But after setting the routes I have 3 problems:

    1. The "terms" view isn't rendering;
    2. When I refresh the #about or the #privacy page, the home view renders after the #about/#privacy view;
    3. When I hit the back button the home view never renders. For example, if I'm in the #about page and I hit the back button to the homepage, the about view stays in the page.

    I don't know what I'm doing wrong about the 1st problem. I think the 2nd and 3rd problems are related to something missing in the home router, but I don't know what it is. Here is my code:

    HTML

        <section class="feed">
            <script id="homeTemplate" type="text/template">
                <div class="home">
                </div>
            </script>
            <script id="termsTemplate" type="text/template">
                <div class="terms">
                    Bla bla bla bla
                </div>
            </script>
            <script id="privacyTemplate" type="text/template">
                <div class="privacy">
                    Bla bla bla bla
                </div>
            </script>
            <script id="aboutTemplate" type="text/template">
                <div class="about">
                    Bla bla bla bla
                </div>
            </script>
        </section>

    The views

        app.HomeListView = Backbone.View.extend({
            el: '.feed',
            initialize: function ( initialbooks ) {
                this.collection = new app.BookList (initialbooks);
                this.render();
            },
            render: function() {
                this.collection.each(function( item ){
                    this.renderHome( item );
                }, this);
            },
            renderHome: function ( item ) {
                var bookview = new app.BookView ({ model: item })
                this.$el.append( bookview.render().el );
            }
        });

        app.BookView = Backbone.View.extend ({
            tagName: 'div',
            className: 'home',
            template: _.template( $( '#homeTemplate' ).html()),
            render: function() {
                this.$el.html(this.template(this.model.toJSON()));
                return this;
            }
        });

        app.AboutView = Backbone.View.extend({
            tagName: 'div',
            className: 'about',
            initialize:function () {
                this.render();
            },
            template: _.template( $( '#aboutTemplate' ).html()),
            render: function () {
                this.$el.html(this.template());
                return this;
            }
        });

        app.PrivacyView = Backbone.View.extend ({
            tagName: 'div',
            className: 'privacy',
            initialize: function() {
                this.render();
            },
            template: _.template( $('#privacyTemplate').html() ),
            render: function () {
                this.$el.html(this.template());
                return this;
            }
        });

        app.TermsView = Backbone.View.extend ({
            tagName: 'div',
            className: 'terms',
            initialize: function () {
                this.render();
            },
            template: _.template ( $( '#termsTemplate' ).html() ),
            render: function () {
                this.$el.html(this.template()),
                return this;
            }
        });

    And the router:

        var AppRouter = Backbone.Router.extend({
            routes: {
                ''        : 'home',
                'about'   : 'about',
                'privacy' : 'privacy',
                'terms'   : 'terms'
            },
            home: function () {
                if (!this.homeListView) {
                    this.homeListView = new app.HomeListView();
                };
            },
            about: function () {
                if (!this.aboutView) {
                    this.aboutView = new app.AboutView();
                };
                $('.feed').html(this.aboutView.el);
            },
            privacy: function () {
                if (!this.privacyView) {
                    this.privacyView = new app.PrivacyView();
                };
                $('.feed').html(this.privacyView.el);
            },
            terms: function () {
                if (!this.termsView) {
                    this.termsView = new app.TermsView();
                };
                $('.feed').html(this.termsView.el);
            }
        })

        app.Router = new AppRouter();
        Backbone.history.start();

    I'm missing something but I don't know what. Thanks


  • PHP Frontpage/Page controller

    - by atno
    I'm using the following as frontpage/page controller(s) and it's working OK so far, except for two problems I'm facing, which as you can see are the $pages array and the switch; both are actually much, much longer than the ones I've pasted here. Every time there is a need for a new page controller I have to add it to the $pages array and to the switch, which makes that list very long. How would you overcome this problem, and do you see any other improvement to this code? loadLogic() in the page controllers is used to get functions under pages/controllername/logic/function.php. A sketch of one alternative follows the code below.

    Frontpage controller - index.php:

        include 'common/common.php';

        if(!isset($_GET['p']) OR $_GET['p'] == ''){
            $_GET['p'] = 'home';
            header('Location: index.php?p=home');
        }

        $pages = array('home','register','login','logout','page1','page2','page3');

        $_GET['p'] = trim($_GET['p']);

        if(isset($_GET['p'])){
            if(in_array($_GET['p'], $pages)){
                switch ($_GET['p']) {
                    case 'home':     include 'home.php';     break;
                    case 'register': include 'register.php'; break;
                    case 'login':    include 'login.php';    break;
                    case 'logout':   include 'logout.php';   break;
                    case 'page1':    include 'page1.php';    break;
                    case 'page2':    include 'page2.php';    break;
                    case 'page3':    include 'page3.php';    break;
                }
            }else{
                echo '404!';
            }
        }

    Page controller - {home,register,login,logout,page1,page2,page3}.php:

        include 'tpl/common/header.php';
        contentStart();

        if(isset($_SESSION['logged'])){
            loadLogic('dashboard');
        }else{
            loadLogic('nologin');
        }

        //Display login form in logic page instead links
        if(!isset($_SESSION['logged'])){
            contentEnd();
            loadLogic('nologinForm');
        }else{
            contentEnd();
            include'tpl/common/rcol.php';
        }

        include 'tpl/common/footer.php';

    function loadLogic():

        function loadLogic($logic) {
            $path = dirname(__DIR__) . '/pages';
            $controller = preg_split('/&/',$_SERVER['QUERY_STRING']);
            $controller = trim($controller[0],"p=");
            $logicPath = 'logic';
            $logic = $logic . '.php';
            $err = 0;
            $logicFullPath = $path.'/'.$controller.'/'.$logicPath.'/'.$logic;
            if($err == '0'){
                include "$logicFullPath";
            }
        }

    Folder structure:

        projectName
        |
        ---> common
        |
        ---> pages
        |      |
        |      ---> home
        |      |
        |      ---> register
        |      |
        |      ---> login
        |      |
        |      ---> logout
        |      |
        |      ---> page1
        |      |
        |      ---> page2
        |      |
        |      ---> page3
        |
        ---> tpl
        |      |
        |      ---> common
        |
        --> home.php
        --> register.php
        --> login.php
        --> logout.php
        --> page1.php
        --> page2.php
        --> page3.php
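
    One way to drop both the array and the switch (a sketch; it assumes every page controller lives as its own file next to index.php, so the filesystem itself becomes the whitelist):

        <?php
        $p = isset($_GET['p']) ? trim($_GET['p']) : 'home';

        // Only simple lowercase names, and only if a controller file exists
        if (preg_match('/^[a-z0-9_]+$/', $p) && is_file(__DIR__ . "/$p.php")) {
            include __DIR__ . "/$p.php";
        } else {
            echo '404!';
        }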


  • Why does one of two identical Javascripts work in Firefox?

    - by Gigpacknaxe
    Hi, I have two image swap functions and one works in Firefox and the other does not. The swap functions are identical and both work fine in IE. Firefox does not even recognize the images as hyperlinks. I am very confused and I hope someone can shed some light on this for me. Thank you very much in advance for any and all help. FYI: the working script swaps by onClick via DIV elements and the non-working script swaps onMouseOver/Out via "a" elements. Remember, both of these work just fine in IE. Joshua

    Working JavaScript in FF:

        <script type="text/javascript">
        var aryImages = new Array();
        aryImages[1] = "/tires/images/mich_prim_mxv4_profile.jpg";
        aryImages[2] = "/tires/images/mich_prim_mxv4_tread.jpg";
        aryImages[3] = "/tires/images/mich_prim_mxv4_side.jpg";

        for (i=0; i < aryImages.length; i++) {
            var preload = new Image();
            preload.src = aryImages[i];
        }

        function swap(imgIndex, imgTarget) {
            document[imgTarget].src = aryImages[imgIndex];
        }
        </script>

        <div id="image-container">
            <div style="text-align: right">Click small images below to view larger.</div>
            <div class="thumb-box" onclick="swap(1, 'imgColor')"><img src="/tires/images/thumbs/mich_prim_mxv4_profile_thumb.jpg" width="75" height="75" /></div>
            <div class="thumb-box" onclick="swap(2, 'imgColor')"><img src="/tires/images/thumbs/mich_prim_mxv4_tread_thumb.jpg" width="75" height="75" /></div>
            <div class="thumb-box" onclick="swap(3, 'imgColor')"><img src="/tires/images/thumbs/mich_prim_mxv4_side_thumb.jpg" width="75" height="75" /></div>
            <div><img alt="" name="imgColor" src="/tires/images/mich_prim_mxv4_profile.jpg" /></div>
            <div><a href="mich-prim-102-large.php"><img src="/tires/images/super_view.jpg" border="0" /></a></div>

    Not working in FF:

        <script type="text/javascript">
        var aryImages = new Array();
        aryImages[1] = "/images/home-on.jpg";
        aryImages[2] = "/images/home-off.jpg";
        aryImages[3] = "/images/services-on.jpg";
        aryImages[4] = "/images/services-off.jpg";
        aryImages[5] = "/images/contact_us-on.jpg";
        aryImages[6] = "/images/contact_us-off.jpg";
        aryImages[7] = "/images/about_us-on.jpg";
        aryImages[8] = "/images/about_us-off.jpg";
        aryImages[9] = "/images/career-on.jpg";
        aryImages[10] = "/images/career-off.jpg";

        for (i=0; i < aryImages.length; i++) {
            var preload = new Image();
            preload.src = aryImages[i];
        }

        function swap(imgIndex, imgTarget) {
            document[imgTarget].src = aryImages[imgIndex];
        }
        </script>

        <td>
            <a href="home.php" onMouseOver="swap(1, 'home')" onMouseOut="swap(2, 'home')"><img name="home" src="/images/home-off.jpg" alt="Home Button" border="0px" /></a>
        </td>


  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan

    Introduction and Architecture

    As more and more enterprises deploy some of their non-critical workload on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources. Oracle recently announced the Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS), which allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features:

    - Monitor EBS, EC2 and RDS instances on Amazon Web Services
    - Gather performance metrics and configuration details for AWS instances
    - Raise alerts and violations based on thresholds set on monitoring
    - Generate reports based on the gathered data

    Users of this plug-in can leverage the rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd-party ticketing applications, etc. AWS monitoring via this plug-in is enabled via the Amazon CloudWatch API, and users of this plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. This plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2.

    [Architecture diagram: on-premise EM12c monitoring Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS) through the CloudWatch API]

    Here are a few key features:

    - Rich and exhaustive list of metrics. Metrics can be gathered from an Agent running outside AWS.
    - Critical configuration information.
    - Custom home pages with charts and AWS configuration information.
    - Generate incidents based on thresholds set on monitoring data.

    Discovery and Monitoring

    AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:

        AWS Resource Type | Target Type        | Resource-specific properties
        ------------------+--------------------+---------------------------------------------
        EBS Resource      | Amazon EBS Service | CloudWatch base URI, EC2 base URI, Period,
                          |                    | Volume Id, Proxy Server and Port
        EC2 Resource      | Amazon EC2 Service | CloudWatch base URI, EC2 base URI, Period,
                          |                    | Instance Id, Proxy Server and Port
        RDS Resource      | Amazon RDS Service | CloudWatch base URI, RDS base URI, Period,
                          |                    | Instance Id, Proxy Server and Port

    Proxy server and port are optional and are only needed if the agent is within the firewall. Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions to deploy the plug-in and add the AWS instances.
        ./emcli add_target \
            -name="<target name>" \
            -type="AmazonEC2Service" \
            -host="<host>" \
            -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
            -subseparator=properties="="

        ./emcli set_monitoring_credential \
            -set_name="AWSKeyCredentialSet" \
            -target_name="<target name>" \
            -target_type="AmazonEC2Service" \
            -cred_type="AWSKeyCredential" \
            -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

    The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up under the 'All Targets' list under 'Amazon EC2 Service'. Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c Config Search feature. The following configuration properties and metrics are collected for these resource types:

        Resource Type | Configuration Properties          | Metrics
        --------------+-----------------------------------+---------------------------------------------
        EBS Resource  | Volume Id, Volume Type, Device    | Response: Status
                      | Name, Size, Availability Zone     | Utilization: QueueLength, IdleTime
                      |                                   | Volume Statistics: ReadBandwidth,
                      |                                   |   WriteBandwidth, ReadThroughput,
                      |                                   |   WriteThroughput
                      |                                   | Operation Statistics: ReadSize, WriteSize,
                      |                                   |   ReadLatency, WriteLatency
        EC2 Resource  | Instance ID, Owner Id, Root       | Response: Status
                      | Device Type, Instance Type,       | CPU Utilization: CPU Utilization
                      | Availability Zone                 | Disk I/O: DiskReadBytes, DiskWriteBytes,
                      |                                   |   DiskReadOps, DiskWriteOps, DiskReadRate,
                      |                                   |   DiskWriteRate, DiskIOThroughput,
                      |                                   |   DiskReadOpsRate, DiskWriteOpsRate,
                      |                                   |   DiskOperationThroughput
                      |                                   | Network I/O: NetworkIn, NetworkOut,
                      |                                   |   NetworkInRate, NetworkOutRate,
                      |                                   |   NetworkThroughput
        RDS Resource  | Instance ID, Database Engine      | Response: Status
                      | Name, Database Engine Version,    | Disk I/O: ReadIOPS, WriteIOPS, ReadLatency,
                      | Database Instance Class,          |   WriteLatency, ReadThroughput,
                      | Allocated Storage Size,           |   WriteThroughput
                      | Availability Zone                 | DB Utilization: BinLogDiskUsage,
                      |                                   |   CPUUtilization, DatabaseConnections,
                      |                                   |   FreeableMemory, ReplicaLag, SwapUsage

    Custom Home Pages

    As mentioned above, we have custom home pages for these target types that include basic configuration information, last 24 hours' availability, top metrics and the incidents generated. Here are a few snapshots:

    [Screenshot: EBS Instance home page]
    [Screenshot: EC2 Instance home page]
    [Screenshot: RDS Instance home page]

    Further Reading:

    1) AWS Plugin download
    2) Installation and Read Me
    3) Screenwatch on SlideShare
    4) Extensibility Programmer's Guide
    5) Amazon Web Services


  • MVC Razor Engine For Beginners Part 1

    - by Humprey Cogay, C|EH, E|CSA
    I. What is MVC?

    a. http://www.asp.net/mvc/tutorials/older-versions/overview/asp-net-mvc-overview

    II. Software requirements for this tutorial

    a. Visual Studio 2010/2012. You can get your free copy here: Microsoft Visual Studio 2012
    b. MVC Framework
       Option 1 - Install using a standalone installer: http://www.microsoft.com/en-us/download/details.aspx?id=30683
       Option 2 - Install using Web Platform Installer: http://www.microsoft.com/web/handlers/webpi.ashx?command=getinstallerredirect&appid=MVC4VS2010_Loc

    III. Creating your first MVC4 application

    a. In Visual Studio, click the File > New > Solution link.
    b. Click Other Project Types > Visual Studio Solutions, and in the templates window select Blank Solution; let us name our solution MVCPrimer.
    c. Now click File > New and select Project.
    d. Select Visual C# > Web, select ASP.NET MVC 4 Web Application, and enter MyWebSite as Name.
    e. Select Empty, Razor as view engine, and uncheck Create a unit test project.
    f. You can now view a basic MVC 4 application structure in your Solution Explorer.
    g. Now we will add our first controller by right-clicking on the Controllers folder in your Solution Explorer and selecting Add > Controller.
    h. Change the name of the controller to HomeController and under the scaffolding options select Empty MVC Controller.
    i. You will now see a basic controller with an Index method that returns an ActionResult.
    j. We will now add a new view folder for our Home controller. Right-click on the Views folder in your Solution Explorer, select Add > New Folder, and name this folder Home.
    k. Add a new view by right-clicking on the Views > Home folder and selecting Add View.
    l. Name the view Index, select Razor (CSHTML) as view engine, and leave all checkboxes unchecked for now; click Add.
    m. Note the relationship between our HomeController and the Home views subfolder.
    n. Add new HTML contents to our newly created Index view.
    o. Press F5 to run our MVC application.
    p. We will create our new model. Right-click on the Models folder of our Solution Explorer and select Add > Class.
    q. Let us name our class Customer.
    r. Edit the Customer class with the following code.
    s. Open the HomeController by double-clicking HomeController in our Controllers folder and edit it as follows:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;

        namespace MyWebSite.Controllers
        {
            public class HomeController : Controller
            {
                //
                // GET: /Home/

                public ActionResult Index()
                {
                    return View();
                }

                public ActionResult ListCustomers()
                {
                    List<Models.Customer> customers = new List<Models.Customer>();

                    //Add First Customer to Our Collection
                    customers.Add(new Models.Customer()
                    {
                        Id = 1,
                        CompanyName = "Volvo",
                        ContactNo = "123-0123-0001",
                        ContactPerson = "Gustav Larson",
                        Description = "Volvo Car Corporation, or Volvo Personvagnar AB, is a Scandinavian automobile manufacturer founded in 1927"
                    });

                    //Add Second Customer to Our Collection
                    customers.Add(new Models.Customer()
                    {
                        Id = 2,
                        CompanyName = "BMW",
                        ContactNo = "999-9876-9898",
                        ContactPerson = "Franz Josef Popp",
                        Description = "Bayerische Motoren Werke AG, (BMW; English: Bavarian Motor Works) is a " +
                                      "German automobile, motorcycle and engine manufacturing company founded in 1917. "
                    });

                    //Add Third Customer to Our Collection
                    customers.Add(new Models.Customer()
                    {
                        Id = 3,
                        CompanyName = "Audi",
                        ContactNo = "983-2222-1212",
                        ContactPerson = "Karl Benz",
                        Description = " is a multinational division of the German manufacturer Daimler AG,"
                    });

                    return View(customers);
                }
            }
        }

    t. Let us now create a view for this class. But before continuing, press Ctrl + Shift + B to rebuild the solution; this will make the previously created model appear in the model class drop-down of the Add View menu. Right-click on Views > Home and select Add > View.
    u. Let us name our view ListCustomers, select Razor (CSHTML) as view engine, put a check mark on Create a strongly-typed view, and select Customer (MyWebSite.Models) as model class. Select List in the scaffold template and click OK.
    v. Run the MVC application by pressing F5, and in the address bar insert Home/ListCustomers. We should now see a web page similar to the one below.
    x. You can edit ListCustomers.cshtml to remove and add HTML code:

        @model IEnumerable<MyWebSite.Models.Customer>

        @{
            Layout = null;
        }

        <!DOCTYPE html>
        <html>
        <head>
            <meta name="viewport" content="width=device-width" />
            <title>ListCustomers</title>
        </head>
        <body>
            <h2>List of Customers</h2>
            <table border="1">
                <tr>
                    <th>
                        @Html.DisplayNameFor(model => model.CompanyName)
                    </th>
                    <th>
                        @Html.DisplayNameFor(model => model.Description)
                    </th>
                    <th>
                        @Html.DisplayNameFor(model => model.ContactPerson)
                    </th>
                    <th>
                        @Html.DisplayNameFor(model => model.ContactNo)
                    </th>
                </tr>
                @foreach (var item in Model) {
                <tr>
                    <td>
                        @Html.DisplayFor(modelItem => item.CompanyName)
                    </td>
                    <td>
                        @Html.DisplayFor(modelItem => item.Description)
                    </td>
                    <td>
                        @Html.DisplayFor(modelItem => item.ContactPerson)
                    </td>
                    <td>
                        @Html.DisplayFor(modelItem => item.ContactNo)
                    </td>
                </tr>
                }
            </table>
        </body>
        </html>

    y. Press F5 to run the MVC application.
    z. You will notice some @Html.DisplayFor calls. These are called HTML helpers; you can read more about HTML helpers on this site: http://www.w3schools.com/aspnet/mvc_htmlhelpers.asp

    That's all. You now have your first MVC4 Razor Engine web application...


  • PHP Facebook Cronjob with offline access

    - by Mohamed Salem
    1: The code to greet the user, ask for his permission and store his session data so that we can use a cronjob with his session data afterwards.

        <?php
        $db_server = "localhost";
        $db_username = "username";
        $db_password = "password";
        $db_name = "databasename";

        #go to line 85, the script actually starts there
        mysql_connect($db_server,$db_username,$db_password);
        mysql_select_db($db_name);
        #you have to create a database to store session values.
        #if you do not know what columns there should be look at line 76 to see column names.
        #make them all varchars

        # Now lets load the FB GRAPH API
        require './facebook.php';

        // Create our Application instance.
        global $facebook;
        $facebook = new Facebook(array(
            'appId'  => '121036530138',
            'secret' => '9bbec378147064',
            'cookie' => false,));

        # Lets set up the permissions we need and set the login url in case we need it.
        $par['req_perms'] = "friends_about_me,friends_education_history,friends_likes,
            friends_interests,friends_location,friends_religion_politics,
            friends_work_history,publish_stream,friends_activities,friends_events,
            friends_hometown,friends_location,user_interests,user_likes,user_events,
            user_about_me,user_status,user_work_history,read_requests,
            read_stream,offline_access,user_religion_politics,email,user_groups";
        $loginUrl = $facebook->getLoginUrl($par);

        function save_session($session){
            global $facebook;
            # OK lets go to the database and see if we have a session stored
            $sid=mysql_query("Select access_token from facebook_user WHERE uid =".$session['uid']);
            $session_id=mysql_fetch_row($sid);
            if (is_array($session_id)) {
                # We have a stored session, but is it valid?
                echo " We have a session, but is it valid?";
                try {
                    $attachment = array('access_token' => $session_id[0]);
                    $ret_code=$facebook->api('/me', 'GET', $attachment);
                }
                catch (Exception $e) {
                    # We don't have a good session so
                    echo " our old session is not valid, let's delete saved invalid session data ";
                    $res = mysql_query("delete from facebook_user WHERE uid =".$session['uid']);
                    #save new good session
                    #to see what is our session data: print_r($session);
                    if (is_array($session)) {
                        $sql="insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('".$session['session_key']."','".$session['uid']."','".$session['expires']."','".$session['secret']."','".$session['access_token']."','".$session['sig']."');";
                        $res = mysql_query($sql);
                        return $session['access_token'];
                    }
                    # this should never ever happen
                    echo " Something is terribly wrong: Our old session was bad, and now we cannot get the new session";
                    return;
                }
                echo " Our old stored session is valid ";
                return $session_id[0];
            }
            else {
                echo " no stored session, this means the user never subscribed to our application before. ";
                # let's store the session
                $session = $facebook->getSession();
                if (is_array($session)) {
                    # Yes we have a session! so lets store it!
                    $sql="insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('".$session['session_key']."','".$session['uid']."','".$session['expires']."','".$session['secret']."','".$session['access_token']."','".$session['sig']."');";
                    $res = mysql_query($sql);
                    return $session['access_token'];
                }
            }
        }

        #this is the first meaningful line of this script.
        $session = $facebook->getSession();

        # Is the user already subscribed to our application?
        if ( is_null($session) ) {
            # no he is not
            #send him to permissions page
            header( "Location: $loginUrl" );
        }
        else {
            #yes, he is already subscribed, or subscribed just now
            #in case he just subscribed now, save his session information
            $access_token=save_session($session);
            echo " everything is ok";
            # write your code here to do something afterwards
        }
        ?>

    error

        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/indexx.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49
        Fatal error: Call to undefined method Facebook::getSession() in /home/content/28/9687528/html/ss/src/indexx.php on line 86

    2: A cronjob template that reads the stored session of a user from the database and uses his session data to work on his behalf, like reading status posts or publishing posts etc.

        <?php
        $db_server = "localhost";
        $db_username = "username";
        $db_password = "pass";
        $db_name = "database";

        # Lets connect to the Database and set up the table
        $link = mysql_connect($db_server,$db_username,$db_password);
        mysql_select_db($db_name);

        # Now lets load the FB GRAPH API
        require './facebook.php';

        // Create our Application instance.
        global $facebook;
        $facebook = new Facebook(array(
            'appId'  => 'appid',
            'secret' => 'secret',
            'cookie' => false, ));

        function get_check_session($uidCheck){
            global $facebook;
            # This function basically checks for a stored session and if we have one it returns it
            # OK lets go to the database and see if we have a session stored
            $sid=mysql_query("Select access_token from facebook_user WHERE uid =".$uidCheck);
            $session_id=mysql_fetch_row($sid);
            if (is_array($session_id)) {
                # We have a session
                # but, is it valid?
                try {
                    $attachment = array('access_token' => $session_id[0],);
                    $ret_code=$facebook->api('/me', 'GET', $attachment);
                }
                catch (Exception $e) {
                    # We don't have a good session so
                    echo " User ".$uidCheck." removed the application, or there is some other access problem. ";
                    # let's delete stored data
                    $res = mysql_query("delete from facebook_user where WHERE uid =".$uidCheck);
                    return;
                }
                return $session_id[0];
            }
            else {
                # "no stored session";
                echo " error:newsFeedcrontab.php No stored sessions. This should not have happened ";
            }
        }

        # get all users that have given us offline access
        $users = getUsers();

        foreach($users as $user){
            # now for each user, check if they are still subscribed to our application
            echo " Checking user".$user;
            $access_token=get_check_session($user);
            # If we've not got an access_token we actually need to login.
            # but in the crontab, we just log the error, there is no way we can find the user to give us permission here.
            if ( is_null($access_token) ) {
                echo " error: newsFeedcrontab.php There is no access token for the user ".$user." ";
            }
            else {
                #we are going to read the newsfeed of user. There are user's friends' posts in this newsfeed
                try{
                    $attachment = array('access_token' => $access_token);
                    $result=$facebook->api('/me/home', 'GET', $attachment);
                }catch(Exception $e){
                    echo " error: newsfeedcrontab.php, cannot get feed of ".$user.$e;
                }
                #do something with the result here
                #but what does the result look like?
                #go to http://developers.facebook.com/docs/reference/api/user/ and click on the "home" link under connections

                #we can also read the home of user. Home is the wall of the user who has given us offline access.
                try{
                    $attachment = array('access_token' => $access_token);
                    $result=$facebook->api('/me/feed', 'GET', $attachment);
                }catch(Exception $e){
                    echo " error: newsfeedcrontab.php, cannot get wall of ".$user.$e;
                }
                #do something with the result here
                #
                #but what does the result look like?
                #go to http://developers.facebook.com/docs/reference/api/user/ and click on the "feed" link under connections
            }
        }

        function getUsers(){
            $sql = "SELECT distinct(uid) from facebook_user Where 1";
            $result = mysql_query($sql);
            while($row = mysql_fetch_array($result)){
                $rows [] = $row['uid'];
            }
            print_r($rows);
            return $rows;
        }

        mysql_close($link);
        ?>

    error

        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/cron.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49
        Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /home/content/28/9687528/html/ss/src/cron.php on line 110
        Warning: Invalid argument supplied for foreach() in /home/content/28/9687528/html/ss/src/cron.php on line 64


  • Unable to make the Ralink rt3090 wifi card work properly on my Lenovo B575 with Kubuntu 12.04 64bit

    - by Sebastien
    I have looked at and tried many solutions from many threads, but I am still unable to make this wifi card work properly (very slow, unable to connect to some wifi spots, etc.). I tried to compile the driver from the Ralink website, but it doesn't work. I tried to blacklist many modules, without any result. So here are some command results; hope they help you help me:

    lspci

        sebastien@sebastien-portable:~$ lspci
        00:00.0 Host bridge: Advanced Micro Devices [AMD] Family 14h Processor Root Complex
        00:01.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Wrestler [Radeon HD 6310]
        00:01.1 Audio device: Advanced Micro Devices [AMD] nee ATI Wrestler HDMI Audio [Radeon HD 6250/6310]
        00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode]
        00:12.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        00:12.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
        00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
        00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 42)
        00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) (rev 40)
        00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller (rev 40)
        00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge (rev 40)
        00:14.5 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
        00:15.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI SB700/SB800/SB900 PCI to PCI bridge (PCIE port 0)
        00:15.2 PCI bridge: Advanced Micro Devices [AMD] nee ATI SB900 PCI to PCI bridge (PCIE port 2)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 0 (rev 43)
        00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 1
        00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 2
        00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 3
        00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 4
        00:18.5 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 6
        00:18.6 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 5
        00:18.7 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 7
        02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)
        03:00.0 Network controller: Ralink corp. RT3090 Wireless 802.11n 1T/1R PCIe

    lsmod

        sebastien@sebastien-portable:~$ lsmod
        Module                  Size  Used by
        rt2800pci              18715  0
        arc4                   12529  2
        rt2800lib              58925  1 rt2800pci
        crc_ccitt              12667  1 rt2800lib
        rt2x00pci              14577  1 rt2800pci
        rt2x00lib              55301  3 rt2800pci,rt2800lib,rt2x00pci
        mac80211              506816  3 rt2800lib,rt2x00pci,rt2x00lib
        cfg80211              205544  2 rt2x00lib,mac80211
        eeprom_93cx6           12725  1 rt2800pci
        rt2860sta             864748  0
        snd_hda_codec_conexant 62128  1
        snd_hda_codec_hdmi     32474  1
        uvcvideo               72627  0
        rts5139               351143  0
        snd_hda_intel          33773  4
        videodev               98259  1 uvcvideo
        snd_hda_codec         127706  3 snd_hda_codec_conexant,snd_hda_codec_hdmi,snd_hda_intel
        snd_hwdep              13668  1 snd_hda_codec
        psmouse                87692  0
        v4l2_compat_ioctl32    17128  1 videodev
        serio_raw              13211  0
        k10temp                13166  0
        snd_pcm                97188  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
        sp5100_tco             13791  0
        i2c_piix4              13301  0
        snd_seq_midi           13324  0
        snd_rawmidi            30748  1 snd_seq_midi
        ideapad_laptop         18234  0
        sparse_keymap          13890  1 ideapad_laptop
        rfcomm                 47604  0
        joydev                 17693  0
        snd_seq_midi_event     14899  1 snd_seq_midi
        bnep                   18281  2
        bluetooth             180104  10 rfcomm,bnep
        parport_pc             32866  0
        ppdev                  17113  0
        snd_seq                61896  2 snd_seq_midi,snd_seq_midi_event
        snd_timer              29990  2 snd_pcm,snd_seq
        snd_seq_device         14540  3 snd_seq_midi,snd_rawmidi,snd_seq
        snd                    78855  18 snd_hda_codec_conexant,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        soundcore              15091  1 snd
        mac_hid                13253  0
        snd_page_alloc         18529  2 snd_hda_intel,snd_pcm
        lp                     17799  0
        parport                46562  3 parport_pc,ppdev,lp
        usbhid                 47199  0
        hid                    99559  1 usbhid
        r8169                  62099  0
        radeon                804372  4
        video                  19596  0
        wmi                    19256  0
        ttm                    76949  1 radeon
        drm_kms_helper         46978  1 radeon
        drm                   242038  6 radeon,ttm,drm_kms_helper
        i2c_algo_bit           13423  1 radeon

    iwconfig

        sebastien@sebastien-portable:~$ iwconfig
        lo        no wireless extensions.

        wlan0     IEEE 802.11bgn  ESSID:"4CE6763F0E0A"
                  Mode:Managed  Frequency:2.452 GHz  Access Point: 4C:E6:76:3F:0E:0A
                  Bit Rate=54 Mb/s   Tx-Power=20 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=70/70  Signal level=-39 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:100   Missed beacon:0

        eth0      no wireless extensions.


  • Educational, well-written FOSS projects to read, study or discuss

    - by Godot
    Before you say it: yes, this "question" has been asked other times. However, I could not find many such questions, and not that easily, and those I found had similar results. What I'm trying to say is that there are no comprehensive lists of well-written open source projects, so I decided to set some requirements for the entries (one or possibly more):

    - Idiomatic use of the language in which they are written.
    - The project should be lightweight. Not as in "a few kbs", as in "clean" and possibly following the UNIX philosophy, making an efficient use of resources and performing its duty and nothing more. No code bloat, most importantly. Projects like Firefox and GNOME wouldn't qualify, for example.
    - Minimal reliance on external, non-standard libraries, with exceptions for some common FOSS libraries (curses, Xlib, OpenGL and possibly "usual suspects" like gtk+, webkit and Boost). Reliance on well-written libraries is welcome.
    - No reliance on proprietary software, for obvious reasons (programs that rely on XNA, DirectX, Cocoa and similar, for example).
    - Well-documented code is welcome.
    - Include a link to a web interface to their repositories if possible.

    Here are some sample projects that often pop up in these threads:

    Operating Systems
    - Plan 9 from Bell Labs: More or less, the official "sequel" to UNIX. Written in C by the same people who invented C!
    - NetBSD: The most portable BSD implementation, written in C and also a good example of portable and organized code.

    Network and Databases
    - Sqlite: Extremely lightweight and extremely efficient, one of the best pieces of C software I've seen. Count the lines yourself!
    - Lighttpd: A small but pretty reliable web server written in C.

    Programming languages and VMs
    - Lua: Extremely lightweight multi-paradigm programming language. Written in C.
    - Tiny C Compiler: Really tiny C compiler. Not really comparable to GCC or Clang but does its job.
    - PyPy: A Python implementation written in Python.
    - Pharo: OK, I admit it, I'm not really a Smalltalk expert, but Pharo is a fork of Squeak and looked rather interesting.
    - Stackless Python: An implementation of Python that doesn't rely on the C call stack, written in C (with some parts in Python).

    Games and 3D
    - Angband: One of the most accessible roguelike codebases around here, written in C.
    - Ogre3D: Cross-platform 3D engine. Gets bloated if you don't skip the platform-specific implementation code; otherwise it is a pretty solid example of good C++ OO.
    - Simon Tatham's Portable Puzzle Collection: Title says it all.

    Other
    - dwm: Lightweight window manager. Written in C.

    Emulation and Reverse Engineering
    - Bochs: x86 emulator, written in C++ and tiny enough.
    - MAME: If you want to see C at one of its lowest levels, MAME is for you. May not be as clean as the other projects but it can teach you A LOT.

    Before you ask: I didn't mention Linux because it has become quite bloated in the last few years; Linus has also confirmed it. Nonetheless, it'd be a great educational read all the same, even if for other reasons. Same for GCC. Feel free to edit or wikify my post. I hope you won't lock my question; I'm only trying to organize a little community effort for the good of all those people who want to enhance their coding skills.


  • Converting .docx to pdf (or .doc to pdf, or .doc to odt, etc.) with libreoffice on a webserver on the fly using php

    - by robertphyatt
    Ok, so I needed to convert .docx files to .pdf files on the fly, but none of the free PHP libraries that were available let me do it on my server (a webservice was not good enough). Basically, either I needed to pay for a library (and have it maybe suck) or just deal with the free ones that didn't convert the formatting well enough. Not good enough! I found that LibreOffice (OpenOffice's successor) allows command line conversion using the LibreOffice conversion engine (which DID preserve the formatting like I wanted and generally worked great). I loaded the latest version of Ubuntu (http://www.ubuntu.com/download/ubuntu/download) onto my VirtualBox (https://www.virtualbox.org/wiki/Downloads) on my computer and found that I was able to easily convert files using the command line like this:

        libreoffice --headless -convert-to pdf fileToConvert.docx -outdir output/path/for/pdf

    I thought: sweet... but I don't have admin rights on my host's web server. I tried to use a "portable" version of LibreOffice that I obtained from http://portablelinuxapps.org/, but I was unable to get it to work on my host's webserver, because my host's webserver didn't have all the dependencies (dependency hell! http://en.wikipedia.org/wiki/Dependency_hell). I was at a loss for how to make it work, until I ran across a cool project made by a Ph.D. student (Philip J. Guo) at Stanford called CDE: http://www.stanford.edu/~pgbovine/cde.html I will let you look at his explanations of how it works (I followed what he did in http://www.youtube.com/watch?feature=player_embedded&v=6XdwHo1BWwY, starting at about 32:00, as well as the directions on his site), but in short, it allows one to avoid dependency hell by copying all the files used when you run certain commands, recreating the Linux environment where the command worked. I was able to use this to run LibreOffice without having to resort to someone's portable version of it, and it worked just like it did when I did it on Ubuntu with the command above, with a tweak: I needed to run the wrapper of LibreOffice that CDE generated.

    So, below is my PHP code that calls it. In this code snippet, the filename to be copied is passed in as $_POST["filename"]. I copy the file to the same spot where I originally converted the file, convert it, copy it back and then delete all the files (so that it doesn't start growing exponentially). I did it this way because I wasn't able to make it work otherwise on the webserver. If there is a linux + webserver ninja out there that can figure out how to make it work without doing this, I would be interested to know what you did. Please post a comment or something if you did that.

        <?php
        //first copy the file to the magic place where we can convert it to a pdf on the fly
        copy($time.$_POST["filename"], "../LibreOffice/cde-package/cde-root/home/robert/Desktop/".$_POST["filename"]);

        //change to that directory
        chdir('../LibreOffice/cde-package/cde-root/home/robert');

        //the magic command that does the conversion
        $myCommand = "./libreoffice.cde --headless -convert-to pdf Desktop/".$_POST["filename"]." -outdir Desktop/";
        exec ($myCommand);

        //copy the file back
        copy("Desktop/".str_replace(".docx", ".pdf", $_POST["filename"]), "../../../../../documents/".str_replace(".docx", ".pdf", $_POST["filename"]));

        //delete all the files out of the magic place where we can convert it to a pdf on the fly
        $files1 = scandir('Desktop');

        //my files that I generated all happened to start with a number.
        $pattern = '/^[0-9]/';
        foreach ($files1 as $value) {
            preg_match($pattern, $value, $matches);
            if(count($matches) > 0) {
                unlink("Desktop/".$value);
            }
        }

        //changing the header to the location of the file makes it work well on androids
        header( 'Location: '.str_replace(".docx", ".pdf", $_POST["filename"]) );
        ?>

    And here is the tar.gz file I generated with CDE. To duplicate what I did exactly, put the tar.gz file in a folder somewhere. I will call that folder the "root". Make a new folder called "documents" in the "root" folder. Unpack the tar.gz and run the PHP script above from the "documents" folder. Success! I made a truly portable version of LibreOffice that can convert files on the fly on a webserver using 100% free, open source software!


  • Indefinite hang when restoring SQL 2005 database on a SQL 2008 server in EC2

    - by erinloy
    I'm trying to restore a 25 GB database backup taken from a Windows 2003/SQL 2005 machine to a Windows 2008/SQL 2008 machine in the Amazon EC2 cloud, using a .bak file and SQL Management Studio. SQL Management Studio reports the restore reaches 100% complete, and then just hangs indefinitely (24+ hours) using a lot of CPU, until I restart the SQL Server service. Upon restart, SQL again uses a lot of CPU activity for what seems to be an indefinite amount of time, but the DB never comes online. Here are some details:

    - I have created two EBS volumes, one for DATA and one for LOGS, and I have set the default directories in SQL Server to the \DATA and \LOG directories on these respective volumes. (I wonder if the issue could be related to this, but the DB is too big to restore on the root drive.)
    - I have given the SQL Server user group full access to these directories.
    - The server can create a new empty test DB in these directories just fine, and can back up and restore the test DB.
    - I have tried both restoring a .bak file and attaching directly to copies of the original .mdf/.ldf files, and the result is the same in both cases.
    - Both the .bak restore and the .mdf/.ldf attach occur from/to the EBS volumes.
    - I've also tried the above via SQL script, and "WITH RECOVERY", with no difference in the result, just less UI.
    - The backup contains two full-text indexes.
    - I have to use "WITH MOVE" for most of the files in the backup.
    - There's nothing wrong with the backup or .mdf/.ldf files, as this works just fine on a Windows 2003/SQL 2005 machine in Amazon EC2, but not Windows 2008/SQL 2008.
    - The DB is NOT marked as "Restoring" in SQL Management Studio; it is just listed as a normal database, but throws errors when I try to do anything with it (expand the object browser tree, view properties, etc.)

    Any ideas?
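
    For reference, a script-based restore of this shape (a sketch; the logical names and paths are hypothetical, and RESTORE FILELISTONLY shows the real ones inside the .bak):

        -- List the logical file names inside the backup first
        RESTORE FILELISTONLY FROM DISK = N'D:\backups\mydb.bak';

        -- Then restore with explicit moves and progress output
        RESTORE DATABASE [MyDb]
        FROM DISK = N'D:\backups\mydb.bak'
        WITH MOVE N'MyDb_Data' TO N'E:\DATA\MyDb.mdf',
             MOVE N'MyDb_Log'  TO N'F:\LOG\MyDb.ldf',
             RECOVERY, STATS = 5;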


  • Newbie: get access privilege

    - by Mellon
    I am a newbie on a Linux Ubuntu machine. I logged in to Ubuntu with username student. There are some directories that only the root user is allowed to access, for example /var/lib/mysql (I know I can use sudo to access them, but that is not what I want). If I want to get access privileges on those directories with the student account, can I run the following command:

        chown student: PATH_TO_ROOT_USER_PRIVILEGED_DIR

    and after that access the directory using my own account? Am I right? If I am right, will the root user lose its access privileges because I changed them to the student user? If I am wrong, please tell me the right solution. P.S. Please don't focus on what I am going to do with the /var/lib/mysql directory; that is only my example. As I mentioned above, I mean generally, for those directories which only have root privileges: can I use chown to change the access privileges, and will the root user then lose access because of the change made by chown? I just want to know the effect of chown.
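
    A small sketch of what chown actually does (the path is hypothetical; note that root is never locked out, since root bypasses ordinary file permission checks):

        # Give student ownership of a directory tree
        sudo chown -R student:student /srv/shared

        # The "user" permission bits now apply to student instead of the
        # previous owner; root still has full access afterwards.
        ls -ld /srv/shared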


  • Ensuring a repeatable directory ordering in linux

    - by Paul Biggar
    I run a hosted continuous integration company, and we run our customers' code on Linux. Each time we run the code, we run it in a separate virtual machine. A frequent problem that arises is that a customer's tests will sometimes fail because of the directory ordering of their code checked out on the VM. Let me go into more detail. On OSX, the HFS+ file system ensures that directories are always traversed in the same order. Programmers who use OSX assume that if it works on their machine, it must work everywhere. But it often doesn't work on Linux, because Linux file systems do not offer ordering guarantees when traversing directories. As an example, consider that there are 2 files, a.rb and b.rb. a.rb defines MyObject, and b.rb uses MyObject. If a.rb is loaded first, everything will work. If b.rb is loaded first, it will try to access an undefined variable MyObject, and fail. But worse than this is that it doesn't always just fail. Because directory traversal on Linux is not ordered, the order will differ between machines. This is worse because sometimes the tests pass, and sometimes they fail. This is the worst possible result. So my question is, is there a way to make file system ordering repeatable? Some flag to ext4 perhaps, that says it will always traverse directories in some order? Or maybe a different file system that has this guarantee?
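
    The usual application-level fix, for what it's worth, is to stop relying on readdir order entirely and sort explicitly; a sketch in Ruby, since the example files are .rb (the glob pattern is hypothetical):

        # Load files in a deterministic order on every filesystem
        Dir.glob("lib/**/*.rb").sort.each { |f| require File.expand_path(f) }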

    Read the article

  • Symbolic directory link shared in domain

    - by Sabre
    We have a file server running 2008 R2 Standard; it is a member server in a 2008 AD domain. I need to relocate some of the files and directories and would like to do it behind the scenes, more or less without impacting the users. (The reason is that, due to recent software changes, some of the files HAVE to be located locally on one of the workstations, but they can be accessed by other applications remotely.) So symbolic links seem like the panacea here. I moved a directory to another network share in the same domain (Windows 7 Professional), created a symlink to it in the location it used to be in, and named it the same thing; to the local user it seems almost transparent. I.e., when logged into the desktop of the file server, I can go to the directory and open the link, and it jumps to the other share as if it were local, exactly as expected. Then I tried it from another client computer (also Windows 7 Professional), went through the normal provisioning of R2R and L2R with fsutil... no joy. What I get is an access-denied error, "Logon failure: Unknown username or bad password.", using the same account that I log on to the file server with locally (which happens to be the domain admin). So I cannot believe it is telling the truth, or... I assume the credentials I connect to the first share with are not being passed all the way through the symlink. The end result is that I want users on the domain to browse to share A; inside share A is a mixture of directories/files that reside there and symlinks to directories/files on the second machine over the network in the same domain. Is this possible? Or am I misunderstanding how the symlink should work?
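
    For reference, a sketch of the sequence involved; the machine, share, and path names are made up, and the fsutil flags are the standard symlink evaluation switches that must be enabled on every client that follows the link:

        :: On each client machine (elevated prompt):
        fsutil behavior set SymlinkEvaluation L2L:1 L2R:1 R2R:1 R2L:1
        fsutil behavior query SymlinkEvaluation

        :: On the file server, creating the remote-to-remote link:
        mklink /D "D:\Shares\ShareA\MovedDir" "\\workstation01\data\MovedDir"

    As far as I understand it, the client resolves the link itself and opens a second SMB session directly against the target machine, so the target (the Windows 7 workstation here) must also accept the caller's credentials; that is where I would look first for the logon failure.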

    Read the article

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size <5MB) on an ext3 filesystem; this is what I have so far. After some searching here on Server Fault I have decided on a structure of directories like this: 000/000/000000001.jpg ... 236/519/236519107.jpg. This structure will allow me to save up to 1'000'000'000 images, as I'll store a maximum of 1'000 images in each leaf. I've created it, and from a theoretical point of view it seems OK to me (though I have no experience with this), but I want to find out what will happen when the directories start to fill up with files. A question about creating this structure: is it better to create it all in one go (takes approx. 50 minutes on my PC) or should I create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is this OK? I thought I would test as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following:
    - How much time does it take for an image to be saved when there is no or little space used?
    - How does this change as the space starts to be used up?
    - How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files?
    Does launching the command sync; echo 3 | sudo tee /proc/sys/vm/drop_caches make any sense at all? Is it the only thing I have to do to get a clean start if I want to start my tests over again? Do you have any suggestions or corrections?
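
    As a starting point, a timing loop along these lines would answer the first two questions; paths and counts are illustrative. Dropping the caches is indeed the standard way to get a cold start, but the sync beforehand matters, since drop_caches only discards clean pages:

        # Flush dirty pages, then drop page/dentry/inode caches for a cold start:
        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

        # Time a batch of writes into one leaf directory:
        time for i in $(seq 1 1000); do cp sample.jpg "236/519/$i.jpg"; done

        # Cold read from a random leaf (drop caches again first):
        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
        time cat 236/519/42.jpg > /dev/null

    Repeating the write loop as the volume fills, and logging the times, would show whether ext3 degrades as the leaves approach their 1'000-file limit.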

    Read the article

  • SSLVerifyClient optional with location-based exceptions

    - by Ian Dunn
    I have a site that requires authentication in order to access certain directories, but not others. (The "directories" are really just rewrite rules that all pass through /index.php.) In order to authenticate, the user can either log in with a standard username/password or submit a client-side x509 certificate. So, Apache's vhost conf looks something like this:
        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient none
        SSLVerifyDepth 1
        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
        </LocationMatch>
    That works fine, but then large file uploads fail because of the behavior documented in bug 12355. The workaround for that is to set SSLVerifyClient require (or optional) as the default, so now the conf looks like this:
        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient optional
        SSLVerifyDepth 1
        <LocationMatch "/(bar-one|bar-two|bar-three)">
            SSLVerifyClient none
        </LocationMatch>
    That fixes the upload problem, but the SSLVerifyClient none doesn't work for bar-one, bar-two, etc.; requests for those directories are still prompted to present a certificate. Additionally, I need the root URL to be accessible without the user being prompted for a certificate. I'm afraid that would cancel out the workaround, though.
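
    One avenue worth sketching: keep SSLVerifyClient none as the global default (which keeps the root URL and the bar-* locations certificate-free) and attack the upload failure from the other side, by enlarging mod_ssl's renegotiation body buffer for the locations that do request a certificate. SSLRenegBufferSize is the mod_ssl directive for this (default 131072 bytes); the size below is illustrative and must exceed the largest expected upload:

        SSLVerifyClient none
        SSLVerifyDepth 1
        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
            # Buffer up to ~100 MB of request body across the renegotiation:
            SSLRenegBufferSize 104857600
        </LocationMatch>

    The trade-off is memory: the buffer is allocated per request, so a value this large only makes sense if concurrent large uploads to these locations are rare.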

    Read the article

  • IIS virtual location path

    - by Worp
    Sorry in advance, for this seems to be a very noobish question that should be easily fixable, yet I can't work it out: I am setting up Windows authentication for a website running on IIS 8.5. The Yii framework of the website takes input like http://localhost/mywebsite/index.php/site/login and processes it, using the "login" action of the "site" controller. I flipped on URL Rewrite to have nicer URLs, leaving the URL as /mywebsite/site/login. Now I need to set up Windows authentication for this location, very specifically ONLY for this exact location. Only the /site/login location of mywebsite needs authentication; every other location needs anonymous access. Since it is a "virtual location", I don't know how to do it. I can set up Windows authentication for files, directories, virtual directories, etc., but not for virtual locations that do not map to any file and only to a controller/action in Yii. The working counterpart in Apache would be a <Location>-scoped directive, but I can't "translate this into IIS". I have read that IIS does have the "~" symbol, but I just couldn't make it work yet. Could it be used to achieve authentication on a location basis? I have also looked at virtual directories, which seem to be a kind of "symlink" to actual folders on the hard drive. Can they be used differently, to "create a virtual folder in a location that doesn't really exist, to manage its properties"? Help is much appreciated.
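
    Since IIS 7+ configuration is URL-based rather than file-based, a <location> element scoped to the virtual path may be worth trying. This is only a sketch: the site and path names are assumptions, it is shown as it would sit in applicationHost.config (where the authentication sections are writable by default), and I am not certain how it interacts with URL Rewrite's processing order:

        <location path="Default Web Site/mywebsite/site/login">
          <system.webServer>
            <security>
              <authentication>
                <anonymousAuthentication enabled="false" />
                <windowsAuthentication enabled="true" />
              </authentication>
            </security>
          </system.webServer>
        </location>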

    Read the article

  • Unrelated Files Corrupted on System Restore

    - by Yar
    I restored OSX 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. In the data directories, I'm seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs, by contrast, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game over for recovery. cp reports: cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted. I have tried sudo chown, sudo chmod -R 777, and sudo chflags -R nouchg, none of which change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, although renaming them, which works, does not change anything). What else can I do to take ownership of these files? Edit: This question comes from Stack Overflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is to help put some of the comments in context.
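
    A few diagnostic steps that might narrow this down, as a sketch; the paths mirror the error message above, and cp's -X flag (on OSX) skips extended attributes entirely, which is worth trying since the copy fails exactly at the extended-attribute step:

        ls -le@ .git/objects/fe/                  # -e shows ACLs, -@ shows extended attributes
        stat -f "%Sf %N" .git/objects/fe/*        # show BSD file flags (e.g. uchg, schg)
        sudo chflags -R noschg,nouchg .git        # clear system and user immutable flags
        sudo cp -R -X .git /eraseme/blah/.git     # copy while skipping xattrs and ACLs

    If the flags listing shows schg or uchg on the affected objects, that would also explain why chmod and chown appeared to succeed without changing anything.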

    Read the article
