Search Results

Search found 68536 results on 2742 pages for 'pst file'.


  • Host AngularJS (Html5Mode) in ASP.NET vNext

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/06/10/host-angularjs-html5mode-in-asp.net-vnext.aspx

    Microsoft announced ASP.NET vNext at BUILD and TechEd recently, and as a developer I found that we can add features such as MVC, Web API and SignalR into one ASP.NET vNext application. It is also cross-platform, which means I can host ASP.NET on Windows, Linux and OS X. If you follow my blog you will know that I'm currently working on a project which uses ASP.NET Web API, SignalR and AngularJS. Currently the AngularJS part is hosted by Express in Node.js while Web API and SignalR are hosted in ASP.NET. I was looking for a way to host all of them on one platform so that SignalR can use WebSocket. At the moment AngularJS and SignalR are hosted on the same domain but on different ports, so SignalR has to fall back to Server-Sent Events; it can be upgraded to WebSocket if I host both of them on the same port.

    Host AngularJS in ASP.NET vNext with the Static File Middleware

    ASP.NET vNext uses the middleware pattern to register the features it needs, which is very similar to Express in Node.js. Since AngularJS is a pure client-side framework, in theory all I need to do is use ASP.NET vNext as a static file server. This is easy because a built-in middleware ships along with ASP.NET vNext. Assume I have "index.html" as below.

        <html data-ng-app="demo">
        <head>
            <script type="text/javascript" src="angular.js"></script>
            <script type="text/javascript" src="angular-ui-router.js"></script>
            <script type="text/javascript" src="app.js"></script>
        </head>
        <body>
            <h1>ASP.NET vNext with AngularJS</h1>
            <div>
                <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> |
                <a href="javascript:void(0)" data-ui-sref="view2">View 2</a>
            </div>
            <div data-ui-view></div>
        </body>
        </html>

    And here is the AngularJS JavaScript file. Notice that I have two views, each of which contains only a one-line literal indicating the view name.

        'use strict';

        var app = angular.module('demo', ['ui.router']);

        app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
            $stateProvider.state('view1', {
                url: '/view1',
                templateUrl: 'view1.html',
                controller: 'View1Ctrl' });

            $stateProvider.state('view2', {
                url: '/view2',
                templateUrl: 'view2.html',
                controller: 'View2Ctrl' });
        }]);

        app.controller('View1Ctrl', function ($scope) {
        });

        app.controller('View2Ctrl', function ($scope) {
        });

    All AngularJS files are located in the "app" folder and my ASP.NET vNext files sit beside it. The "project.json" contains all the dependencies I need to host a static file server.

        {
            "dependencies": {
                "Helios" : "0.1-alpha-*",
                "Microsoft.AspNet.FileSystems": "0.1-alpha-*",
                "Microsoft.AspNet.Http": "0.1-alpha-*",
                "Microsoft.AspNet.StaticFiles": "0.1-alpha-*",
                "Microsoft.AspNet.Hosting": "0.1-alpha-*",
                "Microsoft.AspNet.Server.WebListener": "0.1-alpha-*"
            },
            "commands": {
                "web": "Microsoft.AspNet.Hosting server=Microsoft.AspNet.Server.WebListener server.urls=http://localhost:22222"
            },
            "configurations" : {
                "net45" : {
                },
                "k10" : {
                    "System.Diagnostics.Contracts": "4.0.0.0",
                    "System.Security.Claims" : "0.1-alpha-*"
                }
            }
        }

    Below is "Startup.cs", the entry file of my ASP.NET vNext application. All I need to do is tell the application to use the file server middleware.
        using System;
        using Microsoft.AspNet.Builder;
        using Microsoft.AspNet.FileSystems;
        using Microsoft.AspNet.StaticFiles;

        namespace Shaun.AspNet.Plugins.AngularServer.Demo
        {
            public class Startup
            {
                public void Configure(IBuilder app)
                {
                    app.UseFileServer(new FileServerOptions() {
                        EnableDirectoryBrowsing = true,
                        FileSystem = new PhysicalFileSystem(System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "app"))
                    });
                }
            }
        }

    Next, I need to create a "NuGet.Config" file in the PARENT folder so that when I run the "kpm restore" command later it can find the ASP.NET vNext NuGet packages successfully.

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
            <packageSources>
                <add key="AspNetVNext" value="https://www.myget.org/F/aspnetvnext/api/v2" />
                <add key="NuGet.org" value="https://nuget.org/api/v2/" />
            </packageSources>
            <packageSourceCredentials>
                <AspNetVNext>
                    <add key="Username" value="aspnetreadonly" />
                    <add key="ClearTextPassword" value="4d8a2d9c-7b80-4162-9978-47e918c9658c" />
                </AspNetVNext>
            </packageSourceCredentials>
        </configuration>

    Now I run "kpm restore" to resolve all dependencies of my application. Finally, "k web" starts the application, which serves static files from the "app" subfolder on local port 22222.

    Support AngularJS Html5Mode

    AngularJS works well in the previous demo, but you will notice there is a "#" in the browser address bar. By default AngularJS adds "#" after its entry page to ensure all requests are handled by that entry page. For example, in this case my entry page is "index.html", so when I click "View 1" the address changes to "/#/view1", which still tells the web server I'm looking for "index.html". This works, but it makes the address look ugly. Hence AngularJS offers a feature called Html5Mode, which gets rid of the annoying "#" in the address bar. Below is "app.js" with Html5Mode enabled; it takes just one line of code.

        'use strict';

        var app = angular.module('demo', ['ui.router']);

        app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
            $stateProvider.state('view1', {
                url: '/view1',
                templateUrl: 'view1.html',
                controller: 'View1Ctrl' });

            $stateProvider.state('view2', {
                url: '/view2',
                templateUrl: 'view2.html',
                controller: 'View2Ctrl' });

            // enable html5mode
            $locationProvider.html5Mode(true);
        }]);

        app.controller('View1Ctrl', function ($scope) {
        });

        app.controller('View2Ctrl', function ($scope) {
        });

    Now go to the root path of the website and click "View 1": there is no "#" in the address. The problem is that if we hit F5 the browser turns blank. In this mode the browser asks the web server for a static file named "view1", but no such file exists on the server, so our web server, built with ASP.NET vNext, responds with 404. To fix this we need to create our own ASP.NET vNext middleware. It should first try to serve the request with the default StaticFileMiddleware; if the response status code is 404, it changes the request path to the entry page and tries again.
        public class AngularServerMiddleware
        {
            private readonly AngularServerOptions _options;
            private readonly RequestDelegate _next;
            private readonly StaticFileMiddleware _innerMiddleware;

            public AngularServerMiddleware(RequestDelegate next, AngularServerOptions options)
            {
                _next = next;
                _options = options;

                _innerMiddleware = new StaticFileMiddleware(next, options.FileServerOptions.StaticFileOptions);
            }

            public async Task Invoke(HttpContext context)
            {
                // try to resolve the request with the default static file middleware
                await _innerMiddleware.Invoke(context);
                Console.WriteLine(context.Request.Path + ": " + context.Response.StatusCode);
                // reroute to the entry page if the status code is 404
                // and AngularJS html5mode support is needed
                if (context.Response.StatusCode == 404 && _options.Html5Mode)
                {
                    context.Request.Path = _options.EntryPath;
                    await _innerMiddleware.Invoke(context);
                    Console.WriteLine(">> " + context.Request.Path + ": " + context.Response.StatusCode);
                }
            }
        }

    We need an options class where the user can specify the host root path and the entry page path.

        public class AngularServerOptions
        {
            public FileServerOptions FileServerOptions { get; set; }

            public PathString EntryPath { get; set; }

            public bool Html5Mode
            {
                get
                {
                    return EntryPath.HasValue;
                }
            }

            public AngularServerOptions()
            {
                FileServerOptions = new FileServerOptions();
                EntryPath = PathString.Empty;
            }
        }

    We also need an extension method so that the user can enable this feature in "Startup.cs" easily.

        public static class AngularServerExtension
        {
            public static IBuilder UseAngularServer(this IBuilder builder, string rootPath, string entryPath)
            {
                var options = new AngularServerOptions()
                {
                    FileServerOptions = new FileServerOptions()
                    {
                        EnableDirectoryBrowsing = false,
                        FileSystem = new PhysicalFileSystem(System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, rootPath))
                    },
                    EntryPath = new PathString(entryPath)
                };

                builder.UseDefaultFiles(options.FileServerOptions.DefaultFilesOptions);

                return builder.Use(next => new AngularServerMiddleware(next, options).Invoke);
            }
        }

    With these classes ready we change "Startup.cs" to use this middleware in place of the default one, telling the server to try loading "index.html" when it cannot find the requested resource. The code below is for demo purposes only: it loads "index.html" in every case where the StaticFileMiddleware returns 404. In practice we would need validation to make sure the request is an AngularJS route request rather than a normal static file request.

        using System;
        using Microsoft.AspNet.Builder;
        using Microsoft.AspNet.FileSystems;
        using Microsoft.AspNet.StaticFiles;
        using Shaun.AspNet.Plugins.AngularServer;

        namespace Shaun.AspNet.Plugins.AngularServer.Demo
        {
            public class Startup
            {
                public void Configure(IBuilder app)
                {
                    app.UseAngularServer("app", "/index.html");
                }
            }
        }

    Now run "k web" again, refresh the browser, and the page loads successfully. In the console window we can see that the original request got a 404, after which the middleware looked up "index.html" and returned the correct result.

    Summary

    In this post I showed how to use ASP.NET vNext as a static file server to host an AngularJS application, and how to extend ASP.NET vNext so that it supports AngularJS Html5Mode.
    You can download the source code here. Hope this helps, Shaun. All documents, related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Making Cisco WebEx work with 13.10 Saucy 64-bit

    - by Russ Lowenthal
    I was having a very hard time getting WebEx to work under Saucy. Up until now I've been able to just install a Java plugin, install ia32-libs, and I was good to go. With Saucy, ia32-libs is gone and it's up to us to figure out which 32-bit libraries we need to install. I struggled with this for a few days, blindly installing this and that, until I found a way to get exactly what I need. I got the clue I needed from this post: http://blogs.kde.org/2013/02/05/ot-how-get-webex-working-suse-linux-122-64bit#comment-9534 and for anyone who wants it, here is a step-by-step method that works every time (so far).

    1. Install the JDK and configure the Java plugin for your browser. There is no need for a 32-bit JDK or Firefox.

    2. Try to start a WebEx session. This will create $HOME/.webex/1324/

    3. Check those .so libraries for unresolved dependencies by running ldd against them. For example:

        ldd $HOME/.webex/1324/*.so >>check.txt

       Look in check.txt for anything that is not found. For example, I found:

        libdbr.so:
            linux-gate.so.1 => (0xf7742000)
            libjawt.so => not found
            libX11.so.6 => /usr/lib/i386-linux-gnu/libX11.so.6 (0xf75e6000)
            libXmu.so.6 => not found
            libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xf75e0000)

    4. Find what packages provide those files by installing apt-file:

        sudo apt-get install apt-file
        apt-file update

       Note: apt-file update will take a while; go get a cup of tea. Then locate which package contains your missing libraries with:

        apt-file search libXmu.so.6
        apt-file search libjawt.so

    5. And fix it using:

        apt-get install -y libxmu6:i386
        apt-get install -y libgcj12-awt:i386
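    A small sketch (an assumption, not part of the original steps) that automates step 3 in Python: it runs ldd over the WebEx plugin libraries and prints only the dependencies that resolve to "not found". The directory name 1324 is taken from the example above and may differ on your system.

        import glob
        import os
        import subprocess

        # Report unresolved dependencies of the WebEx plugin libraries.
        webex_dir = os.path.expanduser("~/.webex/1324")
        for lib in sorted(glob.glob(os.path.join(webex_dir, "*.so"))):
            result = subprocess.run(["ldd", lib], capture_output=True, text=True)
            for line in result.stdout.splitlines():
                if "not found" in line:
                    print("%s: %s" % (os.path.basename(lib), line.strip()))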

    Read the article

  • PHP won't load php.ini

    - by Chuck
    I am racking my brain here and I must be doing something really stupid. I'm trying to set up PHP on Win2008 R2/IIS 7.5. I unpacked php538 into c:\php538 and renamed php.ini-development to php.ini. Then I went to a command prompt and ran:

        c:\php358\php -info

    I get:

        Configuration File (php.ini) Path => C:\windows
        Loaded Configuration File => (none)
        Scan this dir for additional .ini files => (none)
        Additional .ini files parsed => (none)

    I have tried using php5217. I tried putting php.ini in c:\windows. I tried creating the PHPRC environment variable and pointing it to c:\php358. Every time I have the same problem: PHP does not find or load the ini file. If I run:

        c:\php358\php -c php.ini -info

    then it will load the file. But I shouldn't have to do this for PHP to find the file in the same directory, in the Windows directory, or via the environment variable, so I'm stumped. When I try to run PHP from IIS I get a 500 error and I can only assume at this time it is because it can't find and load the php.ini file correctly. I see similar questions on here, but none seem to address the problem I am having.

    Read the article

  • How to login as other in ubuntu 12.04

    - by murali
    I have upgraded my Ubuntu 11.10 to 12.04. I cannot see the "Other" login option on the login screen; it shows only a Guest login and a User login. The User login asks only for a password, and I have never logged in with it, so I do not know its password. My problem is: how do I log in as root from the login screen? How can I get the "Other" login option so I can log in as root or some other user?

    Before asking this question I tried the following:

    - I tried to add the line greeter-show-manual-login=true at the bottom of the /etc/lightdm/lightdm.conf file while logged in as Guest, but I get an access denied error. I do not know the password of the User login (which asks only for a password) in order to add the line that way.
    - From a safe-mode login I could log in as root, but I could not add the above line to the lightdm.conf file; I got a read-only error, so I tried to change the permissions with chmod 777 lightdm.conf (from within /etc/lightdm/), but I got an error saying the file system is mounted read-only.
    - In 11.10 I had created 4 users, and I can see that those users still exist in 12.04, so I am sure the users were not removed during the upgrade.

    In short, I need the "Other" login option on my login screen. How do I get it? Please help me.

    Edited question: I have added the following line to the /etc/lightdm/lightdm.conf file in recovery mode:

        greeter-show-manual-login=true

    and saved the file using the wq command. Now my /etc/lightdm/lightdm.conf file looks like this:

        [SeatDefaults]
        greeter-session=unity-greeter
        user-session=ubuntu
        greeter-show-manual-login=true

    If I have made any mistake, please correct me. This problem has cost me two working days and all my work is pending... please help me.

    Read the article

  • Can't upload project to PPA using Quickly

    - by RobinJ
    I can't get Quickly to upload my project into my PPA. I've set up my PGP key and used it to sign the code of conduct, and the PPA exists. I don't know what other useful information I can supply.

        robin@RobinJ:~/Ubuntu One/Python/gtkreddit$ quickly share --ppa robinj/gtkreddit
        Get Launchpad Settings
        Launchpad connection is ok
        gpg: WARNING: unsafe permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        gpg: WARNING: unsafe permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/robin/.gnupg/gpg.conf'
        Traceback (most recent call last):
          File "/usr/share/quickly/templates/ubuntu-application/share.py", line 138, in <module>
            license.licensing()
          File "/usr/share/quickly/templates/ubuntu-application/license.py", line 284, in licensing
            {'translatable': 'yes'})
          File "/usr/share/quickly/templates/ubuntu-application/internal/quicklyutils.py", line 166, in change_xml_elem
            xml_tree.find(parent_node).insert(0, new_node)
        AttributeError: 'NoneType' object has no attribute 'insert'
        ERROR: share command failed
        Aborting

    I reported this as a bug on Launchpad, because I assume that it is a bug. If you know a quick workaround, please let me know. https://bugs.launchpad.net/ubuntu/+source/quickly/+bug/1018138

    Read the article

  • VSDB to SSDT part 3 : command-line deployment with SqlPackage.exe, replacement for Vsdbcmd.exe

    - by Etienne Giust
    For our continuous integration needs, we use a PowerShell script to handle deployment. A simpler approach would be to have a deployment task embedded within the build process. See the solution provided here by Jakob Ehn (a most interesting read which also dives into the "deploying from Visual Studio" specifics): http://geekswithblogs.net/jakob/archive/2012/04/25/deploying-ssdt-projects-with-tfs-build.aspx

    For our needs, though, clearly separating our build phase from our deployment phase is important. It allows us to instantly deploy old versions, and it is more convenient for continuous integration. So we stick with the PowerShell script approach. With VSDB projects, that script used to call the following command (the vsdbcmd executable was locally available, along with the needed libraries):

        vsdbcmd.exe /a:Deploy /dd /cs:<CONNECTIONSTRING TO TARGET DB> /dsp:SQL /manifest:<PATH TO .deploymanifest FILE>

    To do approximately the same thing with an SSDT-produced file (dacpac), you would call this command on a machine which has VS2012 installed (or SSDT installed, see here: http://msdn.microsoft.com/en-us/library/hh500335%28v=vs.103%29):

        "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    And from within a PowerShell script:

        & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    The command consumes a publish.xml file where the connection string and the deployment options are specified. You will be familiar with it if you have done some deployments from Visual Studio; if not, please refer to the above-mentioned article by Jakob Ehn. It is also possible to pass those parameters on the command line. The complete SqlPackage.exe syntax is detailed here: http://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx
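    For teams that drive CI from something other than PowerShell, the same invocation can be wrapped in a small script. This is only a sketch in Python under assumed placeholder paths, not part of the original deployment scripts:

        import subprocess

        # Wrap the SqlPackage.exe publish call; the dacpac and profile paths are placeholders.
        sqlpackage = r"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe"

        result = subprocess.run(
            [sqlpackage,
             "/Action:Publish",
             r"/SourceFile:C:\build\drop\Database.dacpac",      # placeholder path
             r"/Profile:C:\build\drop\Database.publish.xml"],   # placeholder path
            capture_output=True, text=True)

        print(result.stdout)
        if result.returncode != 0:
            raise SystemExit(result.stderr)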

    Read the article

  • Events and objects being skipped in GameMaker

    - by skeletalmonkey
    Update: It turns out this is not an issue with this code (or at least not entirely). Somehow the objects I use for key logging and player automation (basic AI that plays the game) are being 'skipped' or not loaded about half the time. These are invisible objects in a room that have basic effects such as simulating button presses or logging them. I don't know how to better explain this problem without putting up all my code, so unless someone has heard of this issue I guess I'll be banging my head against the desk for a bit. /Update

    I've been continuing work on modifying Spelunky, but I've run into a pretty major issue with GameMaker, which I hope is just me doing something wrong. I have the code below, which is supposed to write log files named sequentially. It's placed in an End Room event so that when a player finishes a level, it writes all their keypresses to file. The problem is that it randomly skips files, and when it reaches about 30 logs it stops creating any new files.

        var file_name;
        file_count = 4;
        file_name = file_find_first("logs/*.txt", 0);
        while (file_name != "")
        {
            file_count += 1;
            file_name = file_find_next();
        }
        file_find_close();

        file = file_text_open_write("logs/log" + string(file_count) + ".txt");
        for(i = 0; i < ds_list_size(keyCodes); i += 1)
        {
            file_text_write_string(file, string(ds_list_find_value(keyCodes, i)));
            file_text_write_string(file, " ");
            file_text_write_string(file, string(ds_list_find_value(keyTimes, i)));
            file_text_writeln(file);
        }
        file_text_close(file);

    My best guess is that the first counting loop is taking too long and the whole thing is getting dropped? Also, if anyone can tell me of a better way to have sequentially numbered log files, that would also be great. Log files have to continue counting over multiple starts/stops of the game.
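    For the "better way to number log files" part of the question, here is a sketch of one approach, written in Python only to illustrate the idea (the same logic would need to be ported to GML's file_find_* functions): derive the next index from the highest existing file name instead of counting files, so a stray file or an interrupted scan cannot produce duplicate or skipped numbers.

        import os
        import re

        def next_log_path(log_dir="logs"):
            # Find the highest existing logN.txt index and return the next one.
            highest = 0
            for name in os.listdir(log_dir):
                match = re.match(r"log(\d+)\.txt$", name)
                if match:
                    highest = max(highest, int(match.group(1)))
            return os.path.join(log_dir, "log%d.txt" % (highest + 1))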

    Read the article

  • In Joomla In htaccess REQUEST_URI is always returning index.php instead of SEF URL

    - by Saumil
    I have installed JoomSEF version 3.9.9 with Joomla 1.5.25. Now I want to force https for some sections of my site (e.g. URIs starting with /events/) while keeping the rest of the URLs on http. I am setting rules in the .htaccess file but not getting the output I expect. I am checking REQUEST_URI of the SEF URLs but always getting index.php as the URI. Here is my .htaccess code:

        ########## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ########## End - Custom redirects

        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root)
        # RewriteBase /

        ########## Begin - Joomla! core SEF Section
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} (/[^.]*|\.(php|html?|feed|pdf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ########## End - Joomla! core SEF Section

    Here is my code (the site URL is e.g. http://mydomain.com/events):

        RewriteCond %{REQUEST_URI} ^/(events)$
        RewriteCond %{HTTPS} !ON
        RewriteRule (.*) https://%{REQUEST_HOST}%{REQUEST_URI}/$1 [L,R=301]

    I am not getting why REQUEST_URI refers to index.php even though the URL in the address bar is something like http://mydomain.com/events. I am using JoomSEF (the Joomla extension for SEF URLs). If I remove the other rules from the .htaccess file then Joomla stops working. I am not an expert, so I don't see a way to handle this. Please let me know if someone has been through the same situation and has a solution or can suggest a workaround. Thanks

    Read the article

  • How to create an Ubuntu 12.10 live CD?

    - by B Biswas
    I downloaded the Ubuntu 12.10 installer from the Ubuntu website. However, I find that it is not an ISO image and I am unable to create a live CD (or DVD) from it. I could not find any help on the Ubuntu website or elsewhere on the internet. Please help. PS - My OS is Windows XP. The Ubuntu installer I downloaded from the Ubuntu website is a zip file. I unzipped the file and it has a wubi file.

    PS - Thanks. I could create a live CD.

    1) First I tried to do it on my laptop, which has Windows 7. It was showing the Ubuntu installer as a zip file and I was not able to burn it to a DVD. That is when I asked this question.
    2) Later I copied the installer to my desktop, which has Windows XP. There the installer shows as an ISO file, so I burned it to a DVD and created the live CD. This works nicely on the desktop.
    3) I tried to run the live CD on my laptop, which is an AMD machine, and the system does not boot up.
    4) On my office desktop, which has Windows 7, the Ubuntu installer shows as an ISO file.

    My questions are as follows:
    A) Why does the Ubuntu installer file show differently on different machines?
    B) Why does the live CD not work on my laptop?

    Read the article

  • Using Sandcastle to build code contracts documentation

    - by DigiMortal
    In my last posting about code contracts I showed how code contracts are documented in XML documents. In this posting I will show you how to get code contracts documented with Sandcastle and Sandcastle Help File Builder. Before we start, let's download the Sandcastle tools we need:

    - Sandcastle
    - Sandcastle Help File Builder

    Install Sandcastle first and then Sandcastle Help File Builder. Because we are generating only HTML-based documentation to upload to a server, we don't need any other tools. Of course, we need Cassini or IIS, but I expect that to be already there on your machine. Open your project and turn on XML documentation for the project and contracts. Now let's run Sandcastle Help File Builder. We have to create a new project and add our Visual Studio solution to it. Now set the HelpFileFormat parameter value to Website and let the builder build the help. You have to wait about two or three minutes until the help is ready. Take a look at the documentation that Sandcastle generated – you will not see much information there about code contracts and their rules.

    Enabling code contracts documentation

    Now let's include code contracts in the documentation. Follow these steps:

    1. Open the Sandcastle folder and make a copy of the vs2005 folder.
    2. Open the CodeContracts folder (c:\program files\microsoft\contracts\) and unzip the archive from its sandcastle folder.
    3. Copy all unzipped files to the Sandcastle folder.
    4. Create (yes, create new) and build your Sandcastle Help File Builder documentation project again.
    5. Open the help. In my case I see something like this now.

    As you can see, the contracts are documented pretty well. We can easily turn on code contracts XML documentation generation and all our contracts are documented automatically. To get the documentation to work we had to use the Sandcastle help file fixes that are installed with Code Contracts, and if we previously had a Sandcastle Help File Builder project we had to create it from scratch to get the new rules accepted. Once the documentation support for contracts works, we have to do nothing more to get contracts documented.

    Read the article

  • debian/rules error "No rule to make target"

    - by Hairo
    I'm having some problems creating a .deb file with debuild. Following some tutorials I managed to make the file, but I always get this error:

        make: *** No rule to make target «build». Stop.
        dpkg-buildpackage: failure: debian/rules build gave error exit status 2
        debuild: fatal error at line 1329:
        dpkg-buildpackage -rfakeroot -D -us -uc -b failed

    Any help?? This is my debian/rules file:

        #!/usr/bin/make -f
        # -*- makefile -*-
        # Sample debian/rules that uses debhelper.
        # This file was originally written by Joey Hess and Craig Small.
        # As a special exception, when this file is copied by dh-make into a
        # dh-make output file, you may use that output file without restriction.
        # This special exception was added by Craig Small in version 0.37 of dh-make.

        # Uncomment this to turn on verbose mode.
        #export DH_VERBOSE=1

        build-stamp: configure-stamp
        	dh_testdir
        	touch build-stamp

        clean:
        	dh_testdir
        	dh_testroot
        	rm -f build-stamp configure-stamp
        	dh_clean

        install: build
        	dh_testdir
        	dh_testroot
        	dh_clean -k
        	dh_installdirs
        	$(MAKE) install DESTDIR=$(CURDIR)/debian/pycounter
        	mkdir -p $(CURDIR)/debian/pycounter
        	# Copy .py files
        	cp pycounter.py $(CURDIR)/debian/pycounter/opt/extras.ubuntu.com/pycounter/pycounter.py
        	cp prefs.py $(CURDIR)/debian/pycounter/opt/extras.ubuntu.com/pycounter/prefs.py
        	# desktop copyright and others (not complete, check)
        	cp extras-pycounter.desktop $(CURDIR)/debian/pycounter/usr/share/applications/extras-pycounter.desktop

    Read the article

  • How I use RegExp in my Java program? [migrated]

    - by MIH1406
    I have the following string examples in a separate text file (numbers are separated by tabs, not spaces):

        00001   1   12  123
        00002   3   7   321
        00003   99  23  332
        00004   192 50  912

    I tried to read the file and print each line if it matches a given RegExp, but I could not find the suitable RegExp for these lines.

        private static void readFile() {
            String fileName = "processes.lst";
            FileReader file = null;
            String result = "";
            try {
                file = new FileReader(fileName);
                BufferedReader reader = new BufferedReader(file);
                String line = null;
                String regEx = "[0-9]\t[0-9]\t[0-9]\t[0-9]";
                while((line = reader.readLine()) != null) {
                    if(line.matches(regEx)) {
                        result += "\n" + line;
                    }
                }
            } catch(Exception e) {
                System.out.println(e.getMessage());
            } finally {
                if(file != null) try {
                    file.close();
                } catch(Exception e) {
                    System.out.println(e.getMessage());
                }
            }
            System.out.println(result);
        }

    I ended up without any string being printed!!
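    One observation worth checking: the sample fields are multi-digit, while [0-9] matches exactly one digit, so each field needs a quantifier such as +. Below is a quick illustration of that pattern idea, sketched in Python rather than the asker's Java (the regex string itself carries over to String.matches unchanged); the sample lines are hypothetical stand-ins for the file contents.

        import re

        # Tab-separated sample lines mirroring the question's data (hypothetical).
        lines = ["00001\t1\t12\t123", "00002\t3\t7\t321", "00003\t99\t23\t332"]

        # Each field can be more than one digit, hence the "+" quantifiers.
        pattern = re.compile(r"[0-9]+\t[0-9]+\t[0-9]+\t[0-9]+")

        for line in lines:
            if pattern.fullmatch(line):
                print(line)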

    Read the article

  • Multiline Replacement With Visual Studio

    - by Alois Kraus
    I had to remove some file headers in a bigger project, all of which were of the form:

        #region File Header
        /*[ Compilation unit ----------------------------------------------------------
              Name            : Class1.cs
              Language        : C#
              Creation Date   :
              Description     :
        -----------------------------------------------------------------------------*/
        /*] END */
        #endregion

    I know it would be a cool thing to write a simple C# program that does a recursive file search, reads all lines, skips the first n lines and writes the files back to disc. But I wanted to test things first before I ruin my source files with one little typo. That is where the Visual Studio "Search and Replace in Files" dialog comes into the game. I can test my regular expression to do a multiline match with the Find button before actually breaking anything, and if something goes wrong I have the Undo button.

    There is a nice blog post from Paulo Morgado online who deals with multiline regular expressions. The Visual Studio regular expressions are non-standard, so you have to adapt your usual regex know-how to the other patterns. The pattern I finally came up with is:

        \#region File Header:b*(.*\n)@\#endregion

    The regular expression can be read as:

        \#region File Header   Match "#region File Header". \# escapes the # character since it is a quantifier.
        :b*                    After this, none or more spaces or tabs can follow (:b stands for space or tab).
        (.*\n)@                Match anything across lines in a non-greedy way (the @ character makes it non-greedy) to prevent matching too much, until the #endregion somewhere in our source file.
        \#endregion            Match everything until "#endregion" is found.

    I always knew that Visual Studio could do it, but I never bothered to learn the non-standard regex syntax. This is powerful, and it has been inside Visual Studio since 2005!
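    The "simple program" alternative mentioned above could look roughly like the following sketch, written here in Python rather than C# and assuming UTF-8 sources and that every header block starts with "#region File Header":

        import os
        import re

        # Strip the "#region File Header ... #endregion" block from each .cs file,
        # using a non-greedy match so only the first header block is removed.
        header = re.compile(r"#region File Header.*?#endregion\s*\n", re.DOTALL)

        def strip_headers(root):
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    if not name.endswith(".cs"):
                        continue
                    path = os.path.join(dirpath, name)
                    with open(path, encoding="utf-8") as f:
                        text = f.read()
                    cleaned = header.sub("", text, count=1)
                    if cleaned != text:
                        with open(path, "w", encoding="utf-8") as f:
                            f.write(cleaned)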

    Read the article

  • How to update MDS ?

    - by harsh.singla
    The other question that comes to mind is how to update an existing file or create a new file in MDS. It's really simple; the setup for this is already done in Foundation Pack. Here are the steps to update or create a file in MDS:

    1) Browse to AIA_HOME/AIAMetaData.
    2) Place your file (it may be a new file or a modified existing file) at the desired location. The same hierarchy will appear in MDS starting from AIAMetaData.
    3) Edit UpdateMetaDataDeploymentPlan.xml in aia_instances/AIA_INSTANCE_NAME/config.
    4) In the tag, mention the files which you have modified or newly added. You can add a file or a directory as required.
    5) Browse to AIA_HOME/Infrastructure/Install/scripts.
    6) Run ant -f UpdateMetaData.xml.

    Here is a sample UpdateMetaData deployment plan: UpdateMetaDataDP.xml

    There are some things that you may want to take care of while updating MDS:

    1) The path in the "fileset" tag can be anywhere on the disk.
    2) The path in the "include" tag has to be relative to what is given in the "fileset" tag.
    3) The path in MDS will appear as what you have given in the "include" tag. The path in this tag will automatically be appended to /apps/AIAMetaData/.

    Read the article

  • Software Center not Opening at all Error

    - by Newbie
    When I open Software Center from the menu, it says "cannot open software database. Please reinstall the software-center package." When I run software-center from a terminal, this error comes up:

        2014-05-28 09:11:20,584 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
        2014-05-28 09:11:20,593 - softwarecenter.ui.gtk3.app - ERROR - xapian open failed
        Traceback (most recent call last):
          File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 302, in __init__
            if self.db.schema_version() != DB_SCHEMA_VERSION:
          File "/usr/share/software-center/softwarecenter/db/database.py", line 289, in schema_version
            return self.xapiandb.get_metadata("db-schema-version")
          File "/usr/share/software-center/softwarecenter/db/database.py", line 177, in xapiandb
            self._db_per_thread[thread_name] = self._get_new_xapiandb()
          File "/usr/share/software-center/softwarecenter/db/database.py", line 190, in _get_new_xapiandb
            xapiandb = xapian.Database(self._db_pathname)
          File "/usr/lib/python2.7/dist-packages/xapian/__init__.py", line 3667, in __init__
            _xapian.Database_swiginit(self, _xapian.new_Database(*args))
        DatabaseCorruptError: /var/cache/software-center/xapian/iamchert: Chert version file should be 28 bytes, actually 0

    Now, when I run the command

        sudo apt-get remove software-center

    I get:

        dpkg: error: corrupt info database format file '/var/lib/dpkg/info/format'
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    I had Ubuntu before but it kind of got corrupted. Now I have freshly reinstalled it, and even from the start Software Center does not open and this error comes up. I hope you have a solution. Thanks.

    Read the article

  • Ways to dynamically render a real world 3d environment in Unity3D

    - by Jake M
    Using Unity3D and C#, I am attempting to display a 3D version of a real-world location. Inside my Unity3D app, the user will specify the GPS coordinates of a location, then my app will have to generate a 3D plane (it doesn't have to be a plane) of that location. The plane will show a 500 metre by 500 metre 3D snapshot of that location. How would you suggest I achieve this in Unity3D? What methodology would you use? NOTE: I understand that this is a very difficult endeavour (rendering real-world locations dynamically in Unity3D), so I expect to perform many steps to achieve it. I just don't know all the technologies out there and which would be best for my needs.

    Suggested methodology 1:
    - Prompt the user to specify GPS coords.
    - Use the Google Earth API and HTTP to programmatically obtain a .khm file describing that location (not sure if Google Earth provides that capability, does it?).
    - Unzip the .khm so I have the .dae file.
    - Convert that file to a .3ds file using a ??? third-party converter (is there a converter that exists?).
    - Import the .3ds into Unity3D at runtime as a plane (is this possible?).

    Suggested methodology 2:
    - Prompt the user to specify GPS coords.
    - Use the Google Earth API and HTTP to programmatically obtain a .khm file describing that location (not sure if Google Earth provides that capability, does it?).
    - Unzip the .khm so I have the .dae file.
    - Parse the .dae file using my own C# parser that I will write (do you think it's possible to write a .dae parser that can parse the .dae into an array of Vector3 describing the height map of that location?).
    - Dynamically create a plane in Unity3D and populate it with my array/list of Vector3 points (is it possible to create a plane this way? Maybe I am meant to create a mesh instead of a plane?).

    Can you think of any other ways I could render a real-world 3D environment in Unity3D?

    Read the article

  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators, ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It's truly the Swiss army knife of systems administration.

    Secure Shell, also known as ssh, was developed in 1995 by Tatu Ylonen after the University of Technology in Finland suffered a password sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is that they provide no security and transmit data in plain text, including sensitive login credentials. SSH provides this security by encrypting all traffic transmitted over the wire to protect from password sniffing attacks.

    One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is:

        scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote

    In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed. If I were only using ftp, this information would be unencrypted as it went across the wire. However, because scp uses SSH, my user credentials and the file and its contents remain confidential and secure throughout the transfer.

        [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1
        user1@test2's password:
        myfile                                    100%    0     0.0KB/s   00:00

    You can also use ssh to send network traffic and utilize the encryption built into ssh to protect that traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuration. To enable TCP forwarding on the remote system, make sure AllowTcpForwarding is set to yes and enabled in the /etc/ssh/sshd_config file:

        AllowTcpForwarding yes

    Once you have this configured, you can connect to the server and set up a local port to which you can direct traffic that will go over the secure tunnel. The following command will set up a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules. In the following example the -D specifies a local dynamic application-level port forward and the -N specifies not to execute a remote command.

        ssh -D 8989 [email protected] -N

    You can also forward specific ports on both the local and remote host. The following example will set up a port forward on port 8080 and forward it to port 80 on the remote machine.

        ssh -L 8080:farwebserver.com:80 [email protected]

    You can even run remote commands via ssh, which is quite useful for scripting or remote system administration tasks. The following example shows how to log in remotely and execute the command ls -la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire.
        [rchase@test1 ~]$ ssh rchase@test2 'ls -la'
        rchase@test2's password:
        total 24
        drwx------  2 rchase rchase 4096 Sep  6 15:17 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    You can execute any command contained in the quotation marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote-controlling systems and performing systems administration tasks using shell scripts.

    To make your shell scripts even more useful and to automate logins, you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key-based authentication. The first step in setting up key-based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating an ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article.

        [rchase@test1 .ssh]$ ssh-keygen -t rsa
        Generating public/private rsa key pair.
        Enter file in which to save the key (/home/rchase/.ssh/id_rsa):
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /home/rchase/.ssh/id_rsa.
        Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.
        The key fingerprint is:
        7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1
        The key's randomart image is:
        +--[ RSA 2048]----+
        |                 |
        |  . .            |
        |   o .           |
        |    . o o        |
        |   o o oS+       |
        |  +   o.= =      |
        |   o ..o.+ =     |
        |    . .+. =      |
        |     ...Eo       |
        +-----------------+

    Now that you have the key generated on the local system, you should copy it to the target server into a temporary location. The user's home directory is fine for this.

        [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchase
        rchase@test2's password:
        id_rsa.pub

    Now that the file has been copied to the server, you need to append it to the authorized_keys file. It should be appended to the end of the file in case there are other authorized keys on the system.

        [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys

    Once the process is complete you are ready to log in. Since you are using key-based authentication, you are not prompted for a password when logging into the system.

        [rchase@test1 ~]$ ssh test2
        Last login: Fri Sep  6 17:42:02 2013 from test1

    This makes it much easier to run remote commands. Here's an example of the remote command from earlier. With no password it's almost as if the command ran locally.

        [rchase@test1 ~]$ ssh test2 'ls -la'
        total 32
        drwx------  3 rchase rchase 4096 Sep  6 17:40 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    As a security consideration, it's important to note the permissions of .ssh and the authorized_keys file. .ssh should be 700 and authorized_keys should be set to 600. This prevents unauthorized access to ssh keys from other users on the system.

    An even easier way to move keys back and forth is to use ssh-copy-id.
    Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you. Here's an example of moving the same key using ssh-copy-id. The -i in the example is so that we can specify the path to the id file, which in this case is /home/rchase/.ssh/id_rsa.pub.

        [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2

    One of the last tips that I will cover is the ssh config file. By using the ssh config file you can set up host aliases to make logins to hosts with odd ports or long hostnames much easier and simpler to remember. Here's an example entry in our .ssh/config file.

        Host dev1
            Hostname somereallylonghostname.somereallylongdomain.com
            Port 28372
            User somereallylongusername12345678

    Let's compare the login process between the two. Which would you want to type and remember?

        ssh somereallylongusername12345678@somereallylonghostname.somereallylongdomain.com -p 28372

        ssh dev1

    I hope you find these tips useful. There are a number of tools used by system administrators to streamline processes and simplify workflows, and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day. Send me your comments and let us know the ways you use SSH with Linux. If you have other tools you would like to see covered in a similar post, send in your suggestions.
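    For scripting beyond the shell, the same remote command can also be run from Python with the paramiko library. This is a sketch only, assuming paramiko is installed and the key-based login shown above is already in place:

        import paramiko

        # Run the earlier 'ls -la' example over SSH from Python (illustrative only).
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect("test2", username="rchase", key_filename="/home/rchase/.ssh/id_rsa")
        _stdin, stdout, _stderr = client.exec_command("ls -la")
        print(stdout.read().decode())
        client.close()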

    Read the article

  • How to make and restore incremental snapshots of hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro and application testing purposes. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any problems and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've come across create a complete new image of the file system each time. Are there any programs / file systems that are capable of taking a snapshot of the current file system and saving it to another location, but that create incremental backups instead of a complete new image? To describe simply what I want: it should be like dd images of a file system, but instead of only full backups it would also create incremental ones. I am not looking for Clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file system. I am also not looking for a "duplicity backup of your whole system excluding some folders" script plus dd to save the MBR; I can do that myself, I'm looking for extra finesse. I'm looking for something I can run before doing massive changes to a system, so that if something went wrong, or I burned my hard disk after spilling coffee on it, I can just boot from a live CD and restore a working snapshot to the hard disk. It does not need to be daily; it doesn't even need a schedule. Just run it once in a while and let it do its job, and preferably raw-based, not file-copy-based.

    Read the article

  • Quickly ubuntu-application + indicator template don't work

    - by aliasbody
    I've started to work with Quickly and Python (because I wanted to have some GTK3 integration and create an appindicator), so I created the project like this:

        quickly create ubuntu-application ualarm
        cd ualarm
        quickly run

    And the application launched. But then I tried to add the appindicator like this:

        quickly add indicator

    And since then the application doesn't start anymore and this error appears:

        aliasbody@BodyUbuntu-PC:~/Projectos/ualarm$ quickly run
        (ualarm:8515): Gtk-WARNING **: Theme parsing error: gnome-panel.css:28:11: Not using units is deprecated. Assuming 'px'.
        /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `Window' can't be set after construction
          Gtk.Window.__init__(self, type=type, **kwds)
        Traceback (most recent call last):
          File "bin/ualarm", line 33, in <module>
            ualarm.main()
          File "/home/aliasbody/Projectos/ualarm/ualarm/__init__.py", line 33, in main
            window = UalarmWindow.UalarmWindow()
          File "/home/aliasbody/Projectos/ualarm/ualarm_lib/Window.py", line 35, in __new__
            new_object.finish_initializing(builder)
          File "/home/aliasbody/Projectos/ualarm/ualarm/UalarmWindow.py", line 24, in finish_initializing
            super(UalarmWindow, self).finish_initializing(builder)
          File "/home/aliasbody/Projectos/ualarm/ualarm_lib/Window.py", line 75, in finish_initializing
            self.indicator = indicator.new_application_indicator(self)
          File "/home/aliasbody/Projectos/ualarm/ualarm/indicator.py", line 52, in new_application_indicator
            ind = Indicator(window)
          File "/home/aliasbody/Projectos/ualarm/ualarm/indicator.py", line 20, in __init__
            self.indicator = AppIndicator3.Indicator('ualarm', '', AppIndicator3.IndicatorCategory.APPLICATION_STATUS)
        TypeError: GObject.__init__() takes exactly 0 arguments (3 given)

    How can I solve this problem?
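    A commonly suggested change for this kind of TypeError (an assumption here, not verified against this exact Quickly template) is to build the indicator with the introspection constructor AppIndicator3.Indicator.new() instead of calling the class with positional arguments, roughly like this:

        from gi.repository import AppIndicator3

        # Sketch of what indicator.py's __init__ could do instead (hypothetical edit).
        indicator = AppIndicator3.Indicator.new(
            "ualarm",
            "",  # icon name, empty as in the generated template
            AppIndicator3.IndicatorCategory.APPLICATION_STATUS)
        indicator.set_status(AppIndicator3.IndicatorStatus.ACTIVE)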

    Read the article

  • What are some best practices for minimizing code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven to be very useful to more than one part of our applications. Most of these modules come with two different files: a normal source file and a min file. Seeing that the performance and speed of a page can be increased by minimizing the size of the files, we're looking into doing that to our pages as well. The problem we run into is that a lot of our normal pages (written in classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. Some pages have their JS both in include files and in the page, depending on whether a function is only really used in that page or is used in many other pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them. There are also functions in some pages that don't really make sense to put into one master file. What I have seen a few other people do online is use one master JS file, place all of their JavaScript into it, minify it, gzip it, and only use that on their production server. Again, this would be great, but I don't know if it fully works for our purposes. What I'm looking for is some direction on this. I'm in favor of taking all of our JS, putting it in one include file, and just having it included in every page that is hit. However, not every page we have needs every bit of JS. So would it be worth compiling and minifying the files into one master file included everywhere, or would it be better to minify all the other files and still include them on a need-to-use basis?
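    As a rough illustration of the "one master JS file" build step discussed above, the shared scripts could be concatenated into a single file that is then minified and gzipped by whatever tool the team already uses. This is a sketch only; the directory layout is an assumption, and only common.js and ajax.js are taken from the question.

        import pathlib

        # Concatenate the shared scripts into one master file for production.
        scripts = ["common.js", "ajax.js"]  # order matters if one depends on the other
        src = pathlib.Path("js")
        out = pathlib.Path("build/master.js")
        out.parent.mkdir(parents=True, exist_ok=True)

        with out.open("w", encoding="utf-8") as master:
            for name in scripts:
                master.write((src / name).read_text(encoding="utf-8"))
                master.write("\n;\n")  # defensive separator between files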

    Read the article

  • Dark NetBeans

    - by Geertjan
    Let's make NetBeans IDE look like this. I'm not saying it's a nice color or anything, just that it's possible to do:

    I changed the coloring in the Java editor by going to Tools | Options, choosing "Fonts & Colors", selecting the "Norway Today" profile and changing the background setting to Dark Gray. Next, I put this themes.xml file into the "config" folder of the NetBeans IDE user directory, which you can identify by going to Help | About in the IDE. Go to the exact location defined by "User directory" in Help | About, and then go to the "config" folder within that folder:

    The "config" folder of the user directory is the readable/writable root of the NetBeans IDE virtual filesystem. If a themes.xml file is found there, it is used, as described here. Then, in the netbeans.conf file, which is not in the NetBeans user directory but in the NetBeans installation directory, within its "etc" folder, I added the following to "netbeans_default_options":

        -J-Dnetbeans.useTheme=true --laf Metal

    The first of these enables usage of the themes.xml file, i.e., it notifies NetBeans IDE at startup to load the themes.xml file and apply its content to the relevant UI components, while the second is needed because most or all of the themes only work if you're using the Metal look and feel.

    Note: I must add that in most cases, whatever you're trying to achieve via a themes.xml file can probably be achieved in a different, and better, way. The themes.xml mechanism has been there forever, but is not actively supported or tested, though it may work for the specific thing you're trying to do anyway. For example, if you're trying to change the background color of a TopComponent, use the paintComponent method of the TopComponent instead of using a themes.xml file.

    Read the article

  • Using mod_rewrite for a Virtual Filesystem vs. Real Filesystem

    - by philtune
    I started working in a department that uses a CMS in which the entire "filesystem" works like this:

    - Create a named file or folder - this file is given a unique node (ex. 2345) as well as a default "filename" (ex. /WelcomeToOurProductsPage), and a template is applied.
    - Assign one or more aliases to the file for a URL redirect (ex. /home-page-products - it can also be accessed by /home-page-products.aspx).
    - A new Rewrite command is written in the .htaccess file for each and every alias.
    - The server accesses either /WelcomeToOurProductsPage or /home-page-products and redirects to something like /template.aspx?tmp=2&node=2345 (here I'm guessing what it does - I only have front-end access for now - but I have enough clues to strongly assume).
    - Node 2345 grabs content stored in a SQL Db and applies it to the template.

    Note: There are no actual files being created on the filesystem. It's entirely virtual. This is probably a very common thing, but since I had never run across this kind of system before two months ago, I wanted to explain it in case it isn't common. I'm not a fan at all of ASP or closed-source systems, so it may be that this is common practice for ASP developers. My question, which has taken far too long to ask, is: what are the benefits of this kind of system, as opposed to creating an actual file hierarchy? Are there any drawbacks to having every single file server call redirected? To having the .htaccess file hold rewrite rules for every single alias?

    Read the article
