Search Results

Search found 40479 results on 1620 pages for 'binary files'.

Page 470/1620

  • My IDE is showing "undeclared FileNotFoundException must be caught or thrown"

    - by Dan Czarnecki
    I am having the following issue above. I have tried actually putting a try-catch statement into the code as you will see below, but I can't get the compiler to get past that. import java.io.*; public class DirectoryStatistics extends DirectorySize { /* Dan Czarnecki October 24, 2013 Class variables: private File directory A File object that holds the pathname of the directory to look in private long sizeInBytes A variable of type long that holds the size of a file/directory (in bytes) private long fileCount A variable of type long that holds the number of files in a directory Constructors: public DirectoryStatistics(File startingDirectory) throws FileNotFoundException Creates a DirectoryStatistics object, given a pathname (inherited from DirectorySize class), and has 3 instance variables that hold the directory to search in, the size of each file (in bytes), and the number of files within the directory Modification history: October 24, 2013 Original version of class */ private File directory; private long sizeInBytes; private long fileCount; public DirectoryStatistics(File startingDirectory) throws FileNotFoundException { super(startingDirectory); try { if(directory == null) { throw new IllegalArgumentException("null input"); } if(directory.isDirectory() == false) { throw new FileNotFoundException("the following input is not a directory!"); } } catch(IOException ioe) { System.out.println("You have not entered a directory. Please try again."); } } public File getDirectory() { return this.directory; } public long getSizeInBytes() { return this.sizeInBytes; } public long getFileCount() { return this.fileCount; } public long setFileCount(long size) { fileCount = size; return size; } public long setSizeInBytes(long size) { sizeInBytes = size; return size; } public void incrementFileCount() { fileCount = fileCount + 1; } public void addToSizeInBytes(long addend) { sizeInBytes = sizeInBytes + addend; } public String toString() { return "Directory" + this.directory + "Size (in bytes) " + this.sizeInBytes + "Number of files: " + this.fileCount; } public int hashCode() { return this.directory.hashCode(); } public boolean equals(DirectoryStatistics other) { return this.equals(other); } } import java.io.*; import java.util.*; public class DirectorySize extends DirectoryProcessor { /* Dan Czarnecki October 17, 2013 Class variables: private Vector<Long> directorySizeList Variable of type Vector<Long> that holds the total file size of files in that directory as well as files within folders of that directory private Vector<File> currentFile Variable of type Vector<File> that holds the parent directory Constructors: public DirectorySize(File startingDirectory) throws FileNotFoundException Creates a DirectorySize object, takes in a pathname (inherited from DirectoryProcessor class, and has a single vector of a DirectoryStatistics object to hold the files and folders within a directory Modification History October 17, 2013 Original version of class Implemented run() and processFile() methods */ private Vector<DirectoryStatistics> directory; /* private Vector<Long> directorySizeList; private Vector<File> currentFile; */ public DirectorySize(File startingDirectory) throws FileNotFoundException { super(startingDirectory); directory = new Vector<DirectoryStatistics>(); } public void processFile(File file) { DirectoryStatistics parent; int index; File parentFile; System.out.println(file.getName()); System.out.println(file.getParent()); parentFile = file.getParentFile(); parent = new 
DirectoryStatistics(parentFile); System.out.println(parent); parent.equals(parent); index = directory.indexOf(parent); if(index == 0) { directory.elementAt(index).addToSizeInBytes(file.length()); directory.elementAt(index).incrementFileCount(); } if(index < 0) { directory.addElement(parent); directory.lastElement().setSizeInBytes(file.length()); directory.lastElement().incrementFileCount(); } Could someone tell me why I'm getting this issue?
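
    A hedged reading of the error: the DirectoryStatistics constructor is declared "throws FileNotFoundException", and processFile() calls it via "new DirectoryStatistics(parentFile)" without catching that checked exception or declaring it, which is exactly what "must be caught or declared to be thrown" complains about. The standalone sketch below reproduces the situation and one way out; the Stats, Demo and process names are illustrative stand-ins, not part of the original code.

        import java.io.File;
        import java.io.FileNotFoundException;

        // Minimal stand-in for DirectoryStatistics: a constructor that declares a checked exception.
        class Stats {
            Stats(File dir) throws FileNotFoundException {
                if (dir == null || !dir.isDirectory()) {
                    throw new FileNotFoundException(dir + " is not a directory");
                }
            }
        }

        public class Demo {
            // Without the try/catch below (or a "throws FileNotFoundException" clause on this
            // method), javac reports the same "must be caught or declared to be thrown" error
            // the IDE shows for processFile().
            static void process(File file) {
                try {
                    Stats stats = new Stats(file.getParentFile());
                    System.out.println("processed " + file);
                } catch (FileNotFoundException e) {
                    System.err.println("skipping " + file + ": " + e.getMessage());
                }
            }

            public static void main(String[] args) {
                process(new File("example.txt"));
            }
        }

    The try/catch already inside the DirectoryStatistics constructor does not help here, because the compiler is looking at the call site in processFile(), not at what the constructor does internally.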

    Read the article

  • Question about qsort in C++

    - by davit-datuashvili
    i have following code in c++ #include <iostream> using namespace std; void qsort5(int a[],int n){ int i; int j; if (n<=1) return; for (i=1;i<n;i++) j=0; if (a[i]<a[0]) swap(++j,i,a); swap(0,j,a); qsort5(a,j); qsort(a+j+1,n-j-1); } int main() { return 0; } void swap(int i,int j,int a[]) { int t=a[i]; a[i]=a[j]; a[j]=t; } i have problem 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(13) : error C2780: 'void std::swap(std::basic_string<_Elem,_Traits,_Alloc> &,std::basic_string<_Elem,_Traits,_Alloc> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\xstring(2203) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(13) : error C2780: 'void std::swap(std::pair<_Ty1,_Ty2> &,std::pair<_Ty1,_Ty2> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(76) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(13) : error C2780: 'void std::swap(_Ty &,_Ty &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(16) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(14) : error C2780: 'void std::swap(std::basic_string<_Elem,_Traits,_Alloc> &,std::basic_string<_Elem,_Traits,_Alloc> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\xstring(2203) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(14) : error C2780: 'void std::swap(std::pair<_Ty1,_Ty2> &,std::pair<_Ty1,_Ty2> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(76) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(14) : error C2780: 'void std::swap(_Ty &,_Ty &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(16) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(16) : error C2661: 'qsort' : no overloaded function takes 2 arguments 1>Build log was saved at "file://c:\Users\dato\Documents\Visual Studio 2008\Projects\qsort5\qsort5\Debug\BuildLog.htm" please help
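
    Three things in the snippet line up with those messages: the three-argument swap() is never declared before it is used, so the calls can only match std::swap (pulled in by "using namespace std"), which takes two arguments (the C2780 errors); the call qsort(a+j+1,n-j-1) resolves to the C library qsort, which takes four arguments (the C2661 error), instead of qsort5; and the for loop has no braces, so its body is just j=0. A hedged, compilable rearrangement - with the helper renamed to swapElements to sidestep the name clash, and sample data in main purely for demonstration - might look like this:

        #include <iostream>

        // Renamed so it cannot be confused with std::swap.
        void swapElements(int i, int j, int a[]) {
            int t = a[i];
            a[i] = a[j];
            a[j] = t;
        }

        void qsort5(int a[], int n) {
            if (n <= 1) return;
            int j = 0;
            for (int i = 1; i < n; i++) {
                if (a[i] < a[0]) swapElements(++j, i, a);  // grow the "less than pivot" region
            }
            swapElements(0, j, a);           // drop the pivot a[0] into its final slot
            qsort5(a, j);                    // sort the left part
            qsort5(a + j + 1, n - j - 1);    // sort the right part (was the stray qsort call)
        }

        int main() {
            int a[] = {5, 3, 8, 1, 9, 2};
            const int n = sizeof(a) / sizeof(a[0]);
            qsort5(a, n);
            for (int i = 0; i < n; i++) std::cout << a[i] << ' ';
            std::cout << '\n';
            return 0;
        }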

    Read the article

  • ClearCase - selective merge

    - by Keshav
    Hi, I have a peculiar ClearCase question. I can't fully explain why I'm using such a confusing setup, but I have to (thanks to a mistake someone made long ago). Here's the background: B1 is a contaminated branch where my group's changes and another group's changes got mixed together so badly that there is no way to tell whose code is whose. The proposed solution is to create a new branch B2 (at the same level as B1) and put all of the other group's unmodified code on it (by merging B1 to B2 and then removing changes until it is back to the original), then create a CR branch on B1 holding only my group's newly added or modified files, and finally create an integration branch off B2 and merge the changes from B1's CR branch onto it. Here is what I did (the use case is a directory D containing files a, b and c; my group modified file a, while b and c were not touched): Branch B1 has files a, b and c. There is another branch B2. I merge B1 to B2, so B2 also has a, b and c; at this point both branches are identical. I then delete file a from B2 (rmname), leaving B2 with only b and c, and I label that branch Label1 - the code at Label1 is now the other group's unmodified code. Next I create a sub-branch CR1 from B1 and delete the files that exist on B2 (i.e. b and c), so that CR1 contains only the modified code - in my case, file a. At this point B2 at Label1 has b and c (unmodified), and CR1 off B1 has only a (modified by us). Finally I create an integration branch off B2 at Label1 and merge the CR branch onto it, expecting to end up with all three files a, b and c, so that a version-tree view would show who modified what. The problem is that because I had rmname'd file a on B2 before applying the label, the merge does not bring file a over from the CR branch. How do I get around that? I want to merge selectively - is that possible? Sorry if this is a bad design; I'm not very conversant with ClearCase and have limited options and time to clean up someone else's mess.

    Read the article

  • Question about my sorting algorithm in C++

    - by davit-datuashvili
    i have following code in c++ #include <iostream> using namespace std; void qsort5(int a[],int n){ int i; int j; if (n<=1) return; for (i=1;i<n;i++) j=0; if (a[i]<a[0]) swap(++j,i,a); swap(0,j,a); qsort5(a,j); qsort(a+j+1,n-j-1); } int main() { return 0; } void swap(int i,int j,int a[]) { int t=a[i]; a[i]=a[j]; a[j]=t; } i have problem 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(13) : error C2780: 'void std::swap(std::basic_string<_Elem,_Traits,_Alloc> &,std::basic_string<_Elem,_Traits,_Alloc> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\xstring(2203) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(13) : error C2780: 'void std::swap(std::pair<_Ty1,_Ty2> &,std::pair<_Ty1,_Ty2> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(76) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(13) : error C2780: 'void std::swap(_Ty &,_Ty &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(16) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(14) : error C2780: 'void std::swap(std::basic_string<_Elem,_Traits,_Alloc> &,std::basic_string<_Elem,_Traits,_Alloc> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\xstring(2203) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(14) : error C2780: 'void std::swap(std::pair<_Ty1,_Ty2> &,std::pair<_Ty1,_Ty2> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(76) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(14) : error C2780: 'void std::swap(_Ty &,_Ty &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(16) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(16) : error C2661: 'qsort' : no overloaded function takes 2 arguments 1>Build log was saved at "file://c:\Users\dato\Documents\Visual Studio 2008\Projects\qsort5\qsort5\Debug\BuildLog.htm" please help

    Read the article

  • .htaccess blocking images on some internal pages

    - by jethomas
    I'm doing some web design for a friend and I noticed that everywhere else on her site images will load fine except for the subdirectory I'm working in. I looked in her .htaccess file and sure enough it is setup to deny people from stealing her images. Fair Enough, except the pages i'm working on are in her domain and yet I still get the 403 error. I'm pasting the .htaccess contents below but I replaced the domain names with xyz, 123 and abc. So specifically the page I'm on (xyz.com/DesignGallery.asp) pulls images from (xyz.com/machform/data/form_1/files) and it results in a forbidden error. RewriteEngine on <Files 403.shtml> order allow,deny allow from all </Files> RewriteCond %{HTTP_REFERER} !^http://xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://xyz.com/machform/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://xyz.com/machform/data/form_1/files/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://xyz.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://abc.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://abc.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://abc.xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://abc.xyz.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://123.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://123.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://123.xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://123.xyz.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.xyz.com/machform/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.xyz.com/machform/$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.xyz.com/machform/data/form_1/files/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.xyz.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.abc.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.abc.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.abc.xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.abc.xyz.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.123.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.123.com$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.123.xyz.com/.*$ [NC] RewriteCond %{HTTP_REFERER} !^http://www.123.xyz.com$ [NC] RewriteRule .*\.(jpg|jpeg|gif|png|bmp)$ - [F,NC] deny from 69.49.149.17 RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^vendors\.html$ "http\:\/\/www\.xyz\.com\/Design_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^vendors\.asp$ "http\:\/\/www\.xyz\.com\/Design_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^ArtGraphics\.html$ "http\:\/\/www\.xyz\.com\/Art_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^ArtGraphics\.asp$ "http\:\/\/www\.xyz\.com\/Art_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^Gear\.asp$ "http\:\/\/www\.xyz\.com\/Gear_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^Gear\.html$ "http\:\/\/www\.xyz\.com\/Gear_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^NewsletterSign\-Up\.html$ "http\:\/\/www\.xyz\.com\/Newsletter\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^NewsletterSign\-Up\.asp$ "http\:\/\/www\.xyz\.com\/Newsletter\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^KidzStuff\.html$ "http\:\/\/www\.xyz\.com\/KidzStuff1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^KidzStuff\.asp$ "http\:\/\/www\.xyz\.com\/KidzStuff1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule ^Vendors\.html$ "http\:\/\/www\.xyz\.com\/Design_Gallery_1\.htm" [R=301,L] RewriteCond %{HTTP_HOST} ^.*$ RewriteRule 
^Vendors\.asp$ "http\:\/\/www\.xyz\.com\/Design_Gallery_1\.htm" [R=301,L]
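
    One gap worth checking before anything else: none of the conditions above lets through requests with an empty Referer header, so browsers or proxies that strip the referrer hit the final [F] rule even on the site's own pages. As a hedged sketch only (same placeholder domains as above, and it assumes nothing else relies on the long condition list), the allow-list can be collapsed and an empty-referrer exception added:

        RewriteEngine on
        # Let requests with no referrer through (privacy tools, some proxies, direct loads).
        RewriteCond %{HTTP_REFERER} !^$
        # Allow the site's own hosts, with or without www. and with or without a path.
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?(xyz|abc|123)\.com(/.*)?$ [NC]
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?(abc|123)\.xyz\.com(/.*)?$ [NC]
        # Only when every condition above still matches (a foreign referrer) is the image blocked.
        RewriteRule \.(jpe?g|gif|png|bmp)$ - [F,NC]

    If images on DesignGallery.asp still come back 403 after that, checking the actual Referer header the browser sends (via its developer tools) will show which condition is doing the blocking.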

    Read the article

  • OpenCV 2.4.2 on MATLAB 2012b (Windows 7)

    - by Maik Xhani
    Hello, I am trying to use OpenCV 2.4.2 in MATLAB 2012b. I have tried these steps: downloaded OpenCV 2.4.2; ran CMake on the opencv folder using the Visual Studio 10 and Visual Studio 10 Win64 compilers; built the Debug and Release versions with Visual Studio, first without any other option and then with D_SCL_SECURE=1 specified for every project; changed MATLAB's mexopts.bat, adding new lines referring to the library and include paths (see bottom for the mexopts.bat content) for the Visual Studio 10 compiler; tried to compile a simple file with an OpenCV header included, and all goes well; tried to compile something that actually uses OpenCV calls, and I get errors. I used the openmexopencv library, and when I try to compile something I get this error: cv.make mex -largeArrayDims -D_SECURE_SCL=1 -Iinclude -I"C:\OpenCV\build\include" -L"C:\OpenCV\build\x64\vc10\lib" -lopencv_calib3d242 -lopencv_contrib242 -lopencv_core242 -lopencv_features2d242 -lopencv_flann242 -lopencv_gpu242 -lopencv_haartraining_engine -lopencv_highgui242 -lopencv_imgproc242 -lopencv_legacy242 -lopencv_ml242 -lopencv_nonfree242 -lopencv_objdetect242 -lopencv_photo242 -lopencv_stitching242 -lopencv_ts242 -lopencv_video242 -lopencv_videostab242 src+cv\CamShift.cpp lib\MxArray.obj -output +cv\CamShift CamShift.cpp C:\Program Files\MATLAB\R2012b\extern\include\tmwtypes.h(821) : warning C4091: 'typedef ': ignored on left of 'wchar_t' when no variable is declared c:\program files\matlab\r2012b\extern\include\matrix.h(319) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int The content of my mexopts.bat is: @echo off rem MSVC100OPTS.BAT rem rem Compile and link options used for building MEX-files rem using the Microsoft Visual C++ compiler version 10.0 rem rem $Revision: 1.1.6.4.2.1 $ $Date: 2012/07/12 13:53:59 $ rem Copyright 2007-2009 The MathWorks, Inc.
rem rem StorageVersion: 1.0 rem C++keyFileName: MSVC100OPTS.BAT rem C++keyName: Microsoft Visual C++ 2010 rem C++keyManufacturer: Microsoft rem C++keyVersion: 10.0 rem C++keyLanguage: C++ rem C++keyLinkerName: Microsoft Visual C++ 2010 rem C++keyLinkerVersion: 10.0 rem rem ******************************************************************** rem General parameters rem ******************************************************************** set MATLAB=%MATLAB% set VSINSTALLDIR=c:\Program Files (x86)\Microsoft Visual Studio 10.0 set VCINSTALLDIR=%VSINSTALLDIR%\VC set OPENCVDIR=C:\OpenCV rem In this case, LINKERDIR is being used to specify the location of the SDK set LINKERDIR=c:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\ set PATH=%VCINSTALLDIR%\bin\amd64;%VCINSTALLDIR%\bin;%VCINSTALLDIR%\VCPackages;%VSINSTALLDIR%\Common7\IDE;%VSINSTALLDIR%\Common7\Tools;%LINKERDIR%\bin\x64;%LINKERDIR%\bin;%MATLAB_BIN%;%PATH% set INCLUDE=%OPENCVDIR%\build\include;%VCINSTALLDIR%\INCLUDE;%VCINSTALLDIR%\ATLMFC\INCLUDE;%LINKERDIR%\include;%INCLUDE% set LIB=%OPENCVDIR%\build\x64\vc10\lib;%VCINSTALLDIR%\LIB\amd64;%VCINSTALLDIR%\ATLMFC\LIB\amd64;%LINKERDIR%\lib\x64;%MATLAB%\extern\lib\win64;%LIB% set MW_TARGET_ARCH=win64 rem ******************************************************************** rem Compiler parameters rem ******************************************************************** set COMPILER=cl set COMPFLAGS=/c /GR /W3 /EHs /D_CRT_SECURE_NO_DEPRECATE /D_SCL_SECURE_NO_DEPRECATE /D_SECURE_SCL=0 /DMATLAB_MEX_FILE /nologo /MD set OPTIMFLAGS=/O2 /Oy- /DNDEBUG set DEBUGFLAGS=/Z7 set NAME_OBJECT=/Fo rem ******************************************************************** rem Linker parameters rem ******************************************************************** set LIBLOC=%MATLAB%\extern\lib\win64\microsoft set LINKER=link set LINKFLAGS=/dll /export:%ENTRYPOINT% /LIBPATH:"%LIBLOC%" libmx.lib libmex.lib libmat.lib /MACHINE:X64 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib opencv_calib3d242.lib opencv_contrib242.lib opencv_core242.lib opencv_features2d242.lib opencv_flann242.lib opencv_gpu242.lib opencv_haartraining_engine.lib opencv_imgproc242.lib opencv_highgui242.lib opencv_legacy242.lib opencv_ml242.lib opencv_nonfree242.lib opencv_objdetect242.lib opencv_photo242.lib opencv_stitching242.lib opencv_ts242.lib opencv_video242.lib opencv_videostab242.lib /nologo /manifest /incremental:NO /implib:"%LIB_NAME%.x" /MAP:"%OUTDIR%%MEX_NAME%%MEX_EXT%.map" set LINKOPTIMFLAGS= set LINKDEBUGFLAGS=/debug /PDB:"%OUTDIR%%MEX_NAME%%MEX_EXT%.pdb" set LINK_FILE= set LINK_LIB= set NAME_OUTPUT=/out:"%OUTDIR%%MEX_NAME%%MEX_EXT%" set RSP_FILE_INDICATOR=@ rem ******************************************************************** rem Resource compiler parameters rem ******************************************************************** set RC_COMPILER=rc /fo "%OUTDIR%mexversion.res" set RC_LINKER= set POSTLINK_CMDS=del "%LIB_NAME%.x" "%LIB_NAME%.exp" set POSTLINK_CMDS1=mt -outputresource:"%OUTDIR%%MEX_NAME%%MEX_EXT%;2" -manifest "%OUTDIR%%MEX_NAME%%MEX_EXT%.manifest" set POSTLINK_CMDS2=del "%OUTDIR%%MEX_NAME%%MEX_EXT%.manifest" set POSTLINK_CMDS3=del "%OUTDIR%%MEX_NAME%%MEX_EXT%.map"

    Read the article

  • MSBuild command-line error - Silverlight 4 SDK is not installed

    - by Ned
    My MSBuild command line is as follows: msbuild e:\code\myProject.csproj /p:Configuration=Debug /p:OutputPath=bin/Debug /p:Platform=x86 /p:PlatformTarget=x86 The project builds fine on my development machine in VS2010 but not with the command above. I am running Win 7 64 - Bit. I'm getting an error that says I don't have the Silverlight 4 SDK installed but I do. I"ve read some posts that you have to set the Platform=x86 but to no avail. Here is the error message in full: Microsoft (R) Build Engine Version 4.0.30319.1 [Microsoft .NET Framework, Version 4.0.30319.1] Copyright (C) Microsoft Corporation 2007. All rights reserved. Build started 6/8/2010 4:03:38 PM. Project "E:\code\dashboards\MyProject2010\MyProject2010.Web\MyProject2010 .web.csproj" on node 1 (default targets). GenerateTargetFrameworkMonikerAttribute: Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output fi les are up-to-date with respect to the input files. CoreCompile: Skipping target "CoreCompile" because all output files are up-to-date with resp ect to the input files. CopyFilesToOutputDirectory: Copying file from "obj\Debug\MyProject.Web.dll" to "bin\Debug\MyProject.Web .dll". MyProject2010.web - E:\code\dashboards\MyProject2010\MyProject2010.Web \bin\Debug\MyProject.Web.dll Copying file from "obj\Debug\MyProject.Web.pdb" to "bin\Debug\MyProject.Web .pdb". Project "E:\code\dashboards\MyProject2010\MyProject2010.Web\MyProject2010 .web.csproj" (1) is building "E:\code\dashboards\MyProject2010\MyProject20 10.Client\MyProject2010.Client.csproj" (2) on node 1 (GetXapOutputFile target( s)). C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0\Microsoft.Silverlight .Common.targets(104,9): error : The Silverlight 4 SDK is not installed. [E:\cod e\dashboards\MyProject2010\MyProject2010.Client\MyProject2010.Client.cspr oj] Done Building Project "E:\code\dashboards\MyProject2010\MyProject2010.Clie nt\MyProject2010.Client.csproj" (GetXapOutputFile target(s)) -- FAILED. Done Building Project "E:\code\dashboards\MyProject2010\MyProject2010.Web\ MyProject2010.web.csproj" (default targets) -- FAILED. Build FAILED. "E:\code\dashboards\MyProject2010\MyProject2010.Web\MyProject2010.web.csp roj" (default target) (1) - "E:\code\dashboards\MyProject2010\MyProject2010.Client\MyProject2010.Clie nt.csproj" (GetXapOutputFile target) (2) - (GetFrameworkPaths target) - C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0\Microsoft.Silverlig ht.Common.targets(104,9): error : The Silverlight 4 SDK is not installed. [E:\c ode\dashboards\MyProject2010\MyProject2010.Client\MyProject2010.Client.cs proj] 0 Warning(s) 1 Error(s) Time Elapsed 00:00:00.39 I appreciate anyone's help. Thanks.
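
    A hedged guess based on the 32/64-bit split rather than anything visible in the log: on 64-bit Windows the Silverlight 4 build targets look for SDK registration that only the 32-bit toolchain sees, so a 64-bit MSBuild.exe can report the SDK as missing even though it is installed (Visual Studio itself is a 32-bit process, which is why the IDE build works). Running the same command through the 32-bit MSBuild is worth trying first; the path below assumes a default .NET 4.0 installation.

        rem 32-bit MSBuild lives under Framework, not Framework64
        C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe e:\code\myProject.csproj /p:Configuration=Debug /p:OutputPath=bin/Debug

    If the 32-bit MSBuild builds cleanly, the SDK itself is fine and the remaining work is making sure the build script picks that MSBuild up instead of the 64-bit one.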

    Read the article

  • Using embedded resources in Silverlight (4) - other cultures not being compiled

    - by Andrei Rinea
    I am having a bit of a hard time providing localized strings for the UI in a small Silverlight 4 application. Basically I've put a folder "Resources" and placed two resource files in it : Statuses.resx Statuses.ro.resx I do have an enum Statuses : public enum Statuses { None, Working } and a convertor : public class StatusToMessage : IValueConverter { public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { if (!Enum.IsDefined(typeof(Status), value)) { throw new ArgumentOutOfRangeException("value"); } var x = Statuses.None; return Statuses.ResourceManager.GetString(((Status)value).ToString(), Thread.CurrentThread.CurrentUICulture); } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } } in the view I have a textblock : <TextBlock Grid.Column="3" Text="{Binding Status, Converter={StaticResource StatusToMessage}}" /> Upon view rendering the converter is called but no matter what the Thread.CurrentThread.CurrentUICulture is set it always returns the default culture value. Upon further inspection I took apart the XAP resulted file, taken the resulted DLL file to Reflector and inspected the embedded resources. It only contains the default resource!! Going back to the two resource files I am now inspecting their properties : Build action : Embedded Resource Copy to output directory : Do not copy Custom tool : ResXFileCodeGenerator Custom tool namespace : [empty] Both resource (.resx) files have these settings. The .Designer.cs resulted files are as follows : Statuses.Designer.cs : //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.1 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace SilverlightApplication5.Resources { using System; /// <summary> /// A strongly-typed resource class, for looking up localized strings, etc. /// </summary> // This class was auto-generated by the StronglyTypedResourceBuilder // class via a tool like ResGen or Visual Studio. // To add or remove a member, edit your .ResX file then rerun ResGen // with the /str option, or rebuild your VS project. [global::System.CodeDom.Compiler.GeneratedCodeAttribute("System.Resources.Tools.StronglyTypedResourceBuilder", "4.0.0.0")] [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()] internal class Statuses { // ... yadda-yadda Statuses.ro.Designer.cs [empty] I've taken both files and put them in a console application and they behave as expected in it, not like in this silverlight application. What is wrong?
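
    A hedged suspicion, given that only the neutral resource ends up in the assembly: Silverlight projects only build and package satellite resource assemblies for cultures that are explicitly listed in the project file, and adding a .ro.resx file does not do that by itself. Unloading the project, editing the .csproj, and adding the element below (assuming "ro" is the only extra culture needed) is the usual first thing to check:

        <!-- inside the first <PropertyGroup> of the Silverlight .csproj -->
        <SupportedCultures>ro</SupportedCultures>

    After a rebuild, the XAP should contain a ro folder with the satellite assembly, and ResourceManager can then resolve the Romanian strings when CurrentUICulture is "ro".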

    Read the article

  • Django FileField not saving to upload_to location

    - by Erik
    I have an Attachment model that has a FileField in a Django 1.4.1 app. This FileField has a callable upload_to parameter which, per the Django docs should be called when the form (and therefore the model) is saved. When I run FormTest below, the upload_to callable is never called and the file therefore does not appear in the location provided by the upload_to method. What am I doing wrong? Notice that in the passing tests in ModelTest (also below), the upload_to method works as expected. Test: from core.forms.attachments import AttachmentForm from django.test import TestCase import unittest from django.core.files.uploadedfile import SimpleUploadedFile from django.core.files.storage import default_storage def suite(): return unittest.TestSuite( [ unittest.TestLoader().loadTestsFromTestCase(FormTest), ] ) class FormTest(TestCase): def test_form_1(self): filename = 'filename' f = file(filename) data = {'name':'name',} file_data = {'attachment_file':SimpleUploadedFile(f.name,f.read()),} form = AttachmentForm(data=data,files=file_data) self.assertTrue(form.is_valid()) attachment = form.save() root_directory = 'attachments' upload_location = root_directory + '/' + attachment.directory + '/' + filename self.assertTrue(attachment.attachment_file) # Fails self.assertTrue(default_storage.exists(upload_location)) # Fails Attachment Model: from django.db import models from parent_mixins import Parent_Mixin import uuid from django.db.models.signals import pre_delete,pre_save from dirtyfields import DirtyFieldsMixin def upload_to(instance,filename): return 'attachments/' + instance.directory + '/' + filename def uuid_directory_name(): return uuid.uuid4().hex class Attachment(DirtyFieldsMixin,Parent_Mixin,models.Model): attachment_file = models.FileField(blank=True,null=True,upload_to=upload_to) directory = models.CharField(blank=False,default=uuid_directory_name,null=False,max_length=32) name = models.CharField(blank=False,default=None,null=False,max_length=128) class Meta: app_label = 'core' def __str__(self): return unicode(self).encode('utf-8') def __unicode__(self): return unicode(self.name) @models.permalink def get_absolute_url(self): return('core_attachments_update',(),{'pk': self.pk}) # def save(self,*args,**kwargs): # super(Attachment,self).save(*args,**kwargs) def pre_delete_callback(sender, instance, *args, **kwargs): if not isinstance(instance, Attachment): return if not instance.attachment_file: return instance.attachment_file.delete(save=False) def pre_save_callback(sender, instance, *args, **kwargs): if not isinstance(instance, Attachment): return if not instance.attachment_file: return if instance.is_dirty(): dirty_fields = instance.get_dirty_fields() if 'attachment_file' in dirty_fields: old_attachment_file = dirty_fields['attachment_file'] old_attachment_file.delete() pre_delete.connect(pre_delete_callback) pre_save.connect(pre_save_callback) Attachment Form: from ..models.attachments import Attachment from crispy_forms.helper import FormHelper from crispy_forms.layout import Div,Layout,HTML,Field,Fieldset,Button,ButtonHolder,Submit from django import forms class AttachmentFormHelper(FormHelper): form_tag=False layout = Layout( Div( Div( Field('name',css_class='span4'), Field('attachment_file',css_class='span4'), css_class='span4', ), css_class='row', ), ) class AttachmentForm(forms.ModelForm): helper = AttachmentFormHelper() class Meta: fields=('attachment_file','name') model = Attachment class AttachmentInlineFormHelper(FormHelper): form_tag=False form_style='inline' layout = 
Layout( Div( Div( Field('name',css_class='span4'), Field('attachment_file',css_class='span4'), Field('DELETE',css_class='span4'), css_class='span4', ), css_class='row', ), ) class AttachmentInlineForm(forms.ModelForm): helper = AttachmentInlineFormHelper() class Meta: fields=('attachment_file','name') model = Attachment UPDATE I also do testing on the Attachment model class with these unit tests -- which all pass: from core.models.attachments import Attachment from core.models.attachments import upload_to from django.test import TestCase import unittest from django.core.files.storage import default_storage from django.core.files.base import ContentFile def suite(): return unittest.TestSuite( [ unittest.TestLoader().loadTestsFromTestCase(ModelTest), ] ) class ModelTest(TestCase): def test_model_minimum_fields(self): attachment = Attachment(name='name') attachment.attachment_file.save('test.txt',ContentFile("hello world")) attachment.save() self.assertEqual(str(attachment),'name') self.assertEqual(unicode(attachment),'name') self.assertTrue(attachment.directory) # def test_model_full_fields(self): # attachment = Attachment() # attachement.save() def test_file_operations_basic(self): root_directory = 'attachments' filename = 'test.txt' attachment = Attachment(name='name') attachment.attachment_file.save(filename,ContentFile('test')) attachment.save() upload_location = root_directory + '/' + attachment.directory + '/' + filename self.assertEqual(upload_to(attachment,filename),upload_location) self.assertTrue(default_storage.exists(upload_location)) def test_file_operations_delete(self): root_directory = 'attachments' filename = 'test.txt' attachment = Attachment(name='name') attachment.attachment_file.save(filename,ContentFile('test')) attachment.save() upload_location = upload_to(attachment,filename) attachment.delete() self.assertFalse(default_storage.exists(upload_location)) def test_file_operations_change(self): root_directory = 'attachments' filename_1 = 'test_1.txt' attachment = Attachment(name='name') attachment.attachment_file.save(filename_1,ContentFile('test')) attachment.save() upload_location_1 = upload_to(attachment,filename_1) self.assertTrue(default_storage.exists(upload_location_1)) filename_2 = 'test_2.txt' attachment.attachment_file.save(filename_2,ContentFile('test')) attachment.save() upload_location_2 = upload_to(attachment,filename_2) self.assertTrue(default_storage.exists(upload_location_2)) self.assertFalse(default_storage.exists(upload_location_1))
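
    Not an answer, but a hedged way to narrow it down: file('filename') reads whatever happens to be in a file literally named "filename" in the working directory, so the SimpleUploadedFile payload in the failing test may be empty or not what is expected. The control test below (the class name is new; the form and field names are the ones from the question) feeds the same form a guaranteed non-empty upload and asserts on the stored name. If it passes, the original test's fixture is the suspect; if it fails too, the model or its pre_save/pre_delete signals are worth a closer look.

        from django.core.files.uploadedfile import SimpleUploadedFile
        from django.test import TestCase

        from core.forms.attachments import AttachmentForm


        class FormUploadControlTest(TestCase):
            def test_upload_with_known_content(self):
                upload = SimpleUploadedFile('control.txt', b'hello world')
                form = AttachmentForm(data={'name': 'name'},
                                      files={'attachment_file': upload})
                self.assertTrue(form.is_valid(), form.errors)
                attachment = form.save()
                # upload_to() should have produced an attachments/<uuid>/control.txt path.
                self.assertTrue(attachment.attachment_file)
                self.assertTrue(attachment.attachment_file.name.startswith('attachments/'))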

    Read the article

  • Django image upload forms

    - by gramware
    I am having problems with django forms and image uploads. I have googled, read the documentations and even questions ere, but cant figure out the issue. Here are my files my models class UserProfile(User): """user with app settings. """ DESIGNATION_CHOICES=( ('ADM', 'Administrator'), ('OFF', 'Club Official'), ('MEM', 'Ordinary Member'), ) onames = models.CharField(max_length=30, blank=True) phoneNumber = models.CharField(max_length=15) regNo = models.CharField(max_length=15) designation = models.CharField(max_length=3,choices=DESIGNATION_CHOICES) image = models.ImageField(max_length=100,upload_to='photos/%Y/%m/%d', blank=True, null=True) course = models.CharField(max_length=30, blank=True, null=True) timezone = models.CharField(max_length=50, default='Africa/Nairobi') smsCom = models.BooleanField() mailCom = models.BooleanField() fbCom = models.BooleanField() objects = UserManager() #def __unicode__(self): # return '%s %s ' % (User.Username, User.is_staff) def get_absolute_url(self): return u'%s%s/%s' % (settings.MEDIA_URL, settings.ATTACHMENT_FOLDER, self.id) def get_download_url(self): return u'%s%s/%s' % (settings.MEDIA_URL, settings.ATTACHMENT_FOLDER, self.name) ... class reports(models.Model): repID = models.AutoField(primary_key=True) repSubject = models.CharField(max_length=100) repRecepients = models.ManyToManyField(UserProfile) repPoster = models.ForeignKey(UserProfile,related_name='repposter') repDescription = models.TextField() repPubAccess = models.BooleanField() repDate = models.DateField() report = models.FileField(max_length=200,upload_to='files/%Y/%m/%d' ) deleted = models.BooleanField() def __unicode__(self): return u'%s ' % (self.repSubject) my forms from django import forms from django.http import HttpResponse from cms.models import * from django.contrib.sessions.models import Session from django.forms.extras.widgets import SelectDateWidget class UserProfileForm(forms.ModelForm): class Meta: model= UserProfile exclude = ('designation','password','is_staff', 'is_active','is_superuser','last_login','date_joined','user_permissions','groups') ... class reportsForm(forms.ModelForm): repPoster = forms.ModelChoiceField(queryset=UserProfile.objects.all(), widget=forms.HiddenInput()) repDescription = forms.CharField(widget=forms.Textarea(attrs={'cols':'50', 'rows':'5'}),label='Enter Report Description here') repDate = forms.DateField(widget=SelectDateWidget()) class Meta: model = reports exclude = ('deleted') my views @login_required def reports_media(request): user = UserProfile.objects.get(pk=request.session['_auth_user_id']) if request.user.is_staff== True: repmedform = reportsForm(request.POST, request.FILES) if repmedform.is_valid(): repmedform.save() repmedform = reportsForm(initial = {'repPoster':user.id,}) else: repmedform = reportsForm(initial = {'repPoster':user.id,}) return render_to_response('staffrepmedia.html', {'repfrm':repmedform, 'rep_media': reports.objects.all()}) else: return render_to_response('reports_&_media.html', {'rep_media': reports.objects.all()}) ... 
@login_required def settingchng(request): user = UserProfile.objects.get(pk=request.session['_auth_user_id']) form = UserProfileForm(instance = user) if request.method == 'POST': form = UserProfileForm(request.POST, request.FILES, instance = user) if form.is_valid(): form.save() return HttpResponseRedirect('/settings/') else: form = UserProfileForm(instance = user) if request.user.is_staff== True: return render_to_response('staffsettingschange.html', {'form': form}) else: return render_to_response('settingschange.html', {'form': form}) ... @login_required def useradd(request): if request.method == 'POST': form = UserAddForm(request.POST,request.FILES ) if form.is_valid(): password = request.POST['password'] request.POST['password'] = set_password(password) form.save() else: form = UserAddForm() return render_to_response('staffadduser.html', {'form':form}) Example of my templates {% if form.errors %} <ol> {% for field in form %} <H3 class="title"> <p class="error"> {% if field.errors %}<li>{{ field.errors|striptags }}</li>{% endif %}</p> </H3> {% endfor %} </ol> {% endif %} <form method="post" id="form" action="" enctype="multipart/form-data" class="infotabs accfrm"> {{ repfrm.as_p }} <input type="submit" value="Submit" /> </form>
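
    It is hard to name the exact failure without an error message, but the two classic causes of "the image never arrives" in Django are binding the form without request.FILES and posting from a template whose form tag lacks enctype="multipart/form-data"; a third, visible in reports_media above, is binding the form to request.POST even on GET requests. Below is a hedged reference sketch of the canonical Django 1.x upload view to compare each view against - upload_view is an illustrative name, the import path for reportsForm is assumed, and the rest reuses names from the question.

        from django.http import HttpResponseRedirect
        from django.shortcuts import render_to_response
        from django.template import RequestContext

        from cms.forms import reportsForm  # assumed module path for the form


        def upload_view(request):
            if request.method == 'POST':
                # request.FILES must be passed in or ImageField/FileField data is silently dropped.
                form = reportsForm(request.POST, request.FILES)
                if form.is_valid():
                    form.save()
                    return HttpResponseRedirect('/reports/')  # redirect after a successful POST
            else:
                form = reportsForm()  # unbound form for GET
            return render_to_response('staffrepmedia.html', {'repfrm': form},
                                      context_instance=RequestContext(request))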

    Read the article

  • Vbscript - Creating a script that mirrors several sets of folders

    - by Kenny Bones
    Ok, this is my problem. I'm doing a logonscript that basically copies Microsoft Word templates from a serverpath on to a local path of each computer. This is done using a check for group membership. If MemberOf(ObjGroupDict, "g_group1") Then oShell.Run "%comspec% /c %LOGONSERVER%\SYSVOL\mydomain.com\scripts\ROBOCOPY \\server\Templates\Group1\OFFICE2003\ " & TemplateFolder & "\" & " * /E /XO", 0, True End If Previously I used the /MIR switch of robocopy, which is exellent. But, if a user is member of more than one group, the /MIR switch removes the content from the first group, since it's mirroring the content from the second group. Meaning, I can't have both contents. This is "solved" by not using the /MIR switch and just let the content get copied anyway. BUT the whole idea of having the templates on a server is so that I can control the content the users receive through the script. So if I delete a file or folder from the server path, this doesn't replicate on the local computer. Since I don't use the /MIR switch anymore. Comprende? So, what do I do? I did a small script that basically checks the folders and files and then removes them accordingly, but this actually ended up being the same functionality as the /MIR switch anyway. How do I solve this problem? Edit: I've found that what I actually need is a routine that scans my local template folder for files and folders and checks if the same structure exists in any of the source template folders. The server template folders are set up like this: \\fileserver\templates\group1\ \\fileserver\templates\group2\ \\fileserver\templates\group3\ \\fileserver\templates\group4\ \\fileserver\templates\group5\ \\fileserver\templates\group6\ And the script that does the copying is structures like this (pseudo): If User is MemberOf (group1) Then RoboCopy.exe \\fileserver\templates\group1\ c:\templates\workgroup *.* /E /XO End if If User is MemberOf (group2) Then RoboCopy.exe \\fileserver\templates\group2\ c:\templates\workgroup *.* /E /XO End if If User is MemberOf (group3) Then RoboCopy.exe \\fileserver\templates\group3\ c:\templates\workgroup *.* /E /XO End if Etc etc With the /E switch, I make sure it copies subfolders as well. And the /XO switch only copies files and folders that are newer than those in my local path. But it doesn't consider if the local path contains files or folders that doesn't exist on the server template path. So after the copying is done, I would like to check if any of the files or folders on my c:\templates\workgroup actually exists in either of the sources. And if they don't, delete them from my local path. Something that could be combined in these memberchecks perhaps?
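
    One hedged way to keep /MIR's delete-propagation without the groups overwriting each other: copy every entitled group folder into a local staging folder first, then mirror the staging folder onto the real template folder in a single pass. In the sketch below, MemberOf and ObjGroupDict are the helpers already used in the logon script, and the paths are placeholders.

        Dim oShell, oFSO, sStaging, sTarget
        Set oShell = CreateObject("WScript.Shell")
        Set oFSO = CreateObject("Scripting.FileSystemObject")

        sStaging = oShell.ExpandEnvironmentStrings("%TEMP%") & "\TemplateStaging"
        sTarget = "C:\Templates\Workgroup"

        ' Start from an empty staging folder so it reflects only current group membership.
        If oFSO.FolderExists(sStaging) Then oFSO.DeleteFolder sStaging, True
        oFSO.CreateFolder sStaging

        ' Plain additive copies, one per group the user belongs to.
        If MemberOf(ObjGroupDict, "g_group1") Then
            oShell.Run "%comspec% /c robocopy ""\\fileserver\templates\group1"" """ & sStaging & """ *.* /E", 0, True
        End If
        If MemberOf(ObjGroupDict, "g_group2") Then
            oShell.Run "%comspec% /c robocopy ""\\fileserver\templates\group2"" """ & sStaging & """ *.* /E", 0, True
        End If

        ' One mirror pass: the local template folder ends up matching the combined staging copy,
        ' so anything removed on the server disappears locally as well.
        oShell.Run "%comspec% /c robocopy """ & sStaging & """ """ & sTarget & """ *.* /MIR", 0, True

    The staging copy costs some extra I/O at logon, but the final mirror is local-to-local, and /MIR gets back the single source of truth it needs.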

    Read the article

  • How to build an offline web app using Flask?

    - by Rafael Alencar
    I'm prototyping an idea for a website that will use the HTML5 offline application cache for certain purposes. The website will be built with Python and Flask and that's where my main problem comes from: I'm working with those two for the first time, so I'm having a hard time getting the manifest file to work as expected. The issue is that I'm getting 404's from the static files included in the manifest file. The manifest itself seems to be downloaded correctly, but the files that it points to are not. This is what is spit out in the console when loading the page: Creating Application Cache with manifest http://127.0.0.1:5000/static/manifest.appcache offline-app:1 Application Cache Checking event offline-app:1 Application Cache Downloading event offline-app:1 Application Cache Progress event (0 of 2) http://127.0.0.1:5000/style.css offline-app:1 Application Cache Error event: Resource fetch failed (404) http://127.0.0.1:5000/style.css The error is in the last line. When the appcache fails even once, it stops the process completely and the offline cache doesn't work. This is how my files are structured: sandbox offline-app offline-app.py static manifest.appcache script.js style.css templates offline-app.html This is the content of offline-app.py: from flask import Flask, render_template app = Flask(__name__) @app.route('/offline-app') def offline_app(): return render_template('offline-app.html') if __name__ == '__main__': app.run(host='0.0.0.0', debug=True) This is what I have in offline-app.html: <!DOCTYPE html> <html manifest="{{ url_for('static', filename='manifest.appcache') }}"> <head> <title>Offline App Sandbox - main page</title> </head> <body> <h1>Welcome to the main page for the Offline App Sandbox!</h1> <p>Some placeholder text</p> </body> </html> This is my manifest.appcache file: CACHE MANIFEST /style.css /script.js I've tried having the manifest file in all different ways I could think of: CACHE MANIFEST /static/style.css /static/script.js or CACHE MANIFEST /offline-app/static/style.css /offline-app/static/script.js None of these worked. The same error was returned every time. I'm certain the issue here is how the server is serving up the files listed in the manifest. Those files are probably being looked up in the wrong place, I guess. I either should place them somewhere else or I need something different in the cache manifest, but I have no idea what. I couldn't find anything online about having HTML5 offline applications with Flask. Is anyone able to help me out?
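
    Two hedged things to look at, neither visible in the 404 alone: the manifest entries must be the same URLs the browser actually requests (so /static/style.css and /static/script.js for this layout), and once an application cache exists the browser keeps using its cached copy of the manifest, so earlier broken attempts tend to stick until the cache is cleared or the manifest content changes. Serving the manifest from a small route (the /manifest.appcache URL below is a choice, not a requirement) also lets it carry the text/cache-manifest MIME type that some browsers insist on:

        from flask import Flask, Response, render_template

        app = Flask(__name__)

        # Built with join() so no stray indentation sneaks into the manifest;
        # changing the version comment forces clients with an old cache to refetch.
        MANIFEST = "\n".join([
            "CACHE MANIFEST",
            "# v2",
            "",
            "CACHE:",
            "/static/style.css",
            "/static/script.js",
            "",
        ])

        @app.route('/manifest.appcache')
        def manifest():
            return Response(MANIFEST, mimetype='text/cache-manifest')

        @app.route('/offline-app')
        def offline_app():
            # the template then uses <html manifest="/manifest.appcache">
            return render_template('offline-app.html')

        if __name__ == '__main__':
            app.run(host='0.0.0.0', debug=True)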

    Read the article

  • Moving MVC2 Helpers to MVC3 razor view engine

    - by Dai Bok
    Hi, In my MVC 2 site, I have an html helper, that I use to add javascripts for my pages. In my master page I have the main javascripts I want to include, and then in the aspx pages, I include page specific javascripts. So for example, my Site.Master has something like this: .... <head> <%=html.renderScripts() %> </head> ... //core scripts for main page <%html.AddScript("/scripts/jquery.js") %> <%html.AddScript("/scripts/myLib.js") %> .... Then in the child aspx page, I may also want to include other scripts. ... //the page specific script I want to use <% html.AddScript("/scripts/register.aspx.js") %> ... So when the full page gets rendered the javascript files are all collected and rendered in the head by sitemaster placeholder function RenderScripts. This works fine. Now with MVC 3 and razor view engine, they layout pages behave differently, because now my page level javascripts are not rendered/included. Now all I see the LayoutMaster contents. How do I get the solution wo workwith MVC 3 and the razor view engine. (The helper has already been re-written to return a HTMLString ;-)) For reference: my MasterLayout looks like this: ... ... <head> @{ Html.AddJavaScript("/Scripts/jQuery.js"); Html.AddJavaScript("/Scripts/myLib.js"); } //Render scripts @html.RenderScripts() </head> .... and the child page looks like this: @{ Layout = "~/Views/Shared/MasterLayout.cshtml"; ViewBag.Title = "Child Page"; Html.AddJavaScript("/Scripts/register.aspx.js"); } .... <div>some html </div> Thanks for your help. Edit = Just to explain, if this question is not clear enough. When producing a "page" I collect all the javascript files the designers want to use, by using the html.addJavascript("filename.js") and store these in a dictionary - (1) stops people adding duplicate js files - then finally when the page is ready to render, I write out all the javascript files neatly in the header. (2) - this helper helps keep JS in one place, and prevents designers from adding javascript files all over the place. This used to work fine with Master/SiteMaster Pages in mvc 2. but how can I achieve this with razor?
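
    Without seeing how the helper stores its script list it is hard to say why the page-level entries vanish, but Razor ships with a mechanism for exactly this hand-off: sections. A hedged sketch of the same setup using @RenderSection (the PageScripts section name is arbitrary; file names are reused from the question):

        @* MasterLayout.cshtml *@
        <head>
            <script src="@Url.Content("~/Scripts/jquery.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/myLib.js")" type="text/javascript"></script>
            @RenderSection("PageScripts", required: false)
        </head>

        @* child view *@
        @{
            Layout = "~/Views/Shared/MasterLayout.cshtml";
            ViewBag.Title = "Child Page";
        }
        @section PageScripts {
            <script src="@Url.Content("~/Scripts/register.aspx.js")" type="text/javascript"></script>
        }
        <div>some html </div>

    If keeping the custom helper matters, storing the collected paths somewhere request-scoped such as HttpContext.Items, rather than in state tied to a single view instance, is the usual way to make the list survive the hop from the view to the layout.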

    Read the article

  • IE6 CSS tooltip not appearing

    - by Lauren
    I'm using a tooltip that works in FF, Chrome, and IE7-8, but in IE6 it doesn't appear. You can go to this page http://www.avaline.com/ Bags/ Eco-Friendly-Bags/R1500 and login with [email protected] password:test02, then hit the "add to cart" button and hover over the question marks to see (or not see) the tooltips. This is the relevant HTML and CSS: <DIV class=oewBox id=oewImpLocDiv style="BACKGROUND-IMAGE: url(/images/img/org4.gif)"> <A class=tooltip href="#"><SPAN class=""><STRONG>More than 2 imprint locations?</STRONG> Test </SPAN></A> </DIV> <style> /* Rule from element "style" attribute */ element.style { BACKGROUND-IMAGE: url(/images/img/org4.gif) } /* Rule N°8 from inline stylesheet */ .oewBox { PADDING-RIGHT: 8px; PADDING-LEFT: 40px; PADDING-BOTTOM: 16px; MARGIN: 0px 0px 6px; PADDING-TOP: 6px; BORDER-BOTTOM: #ff7c14 3px solid } /* Rule N°7 from inline stylesheet */ .oewBox { BACKGROUND-POSITION: 0px 0px; BACKGROUND-IMAGE: none; BACKGROUND-REPEAT: no-repeat } /* Rule N°11 from /site/av-files/mainstyles.css */ A:active { COLOR: #3b88c4; TEXT-DECORATION: none } /* Rule N°10 from /site/av-files/mainstyles.css */ A:hover { COLOR: #000; TEXT-DECORATION: none } /* Rule N°9 from /site/av-files/mainstyles.css */ A:visited { COLOR: #3b88c4; TEXT-DECORATION: underline } /* Rule N°8 from /site/av-files/mainstyles.css */ A:link { COLOR: #3b88c4; TEXT-DECORATION: underline } /* Rule N°7 from /site/av-files/mainstyles.css */ A { COLOR: #3b88c4; TEXT-DECORATION: underline } /* Rule N°52 from inline stylesheet */ A.tooltip { BACKGROUND: url(/images/img/question.gif) no-repeat; FLOAT: right; WIDTH: 19px; HEIGHT: 20px } /* Rule N°54 from inline stylesheet */ A.tooltip:hover SPAN { BORDER-RIGHT: #ff7c14 1px solid; BORDER-TOP: #ff7c14 1px solid; DISPLAY: inline; BACKGROUND: #ffffff; BORDER-LEFT: #ff7c14 1px solid; COLOR: #000; BORDER-BOTTOM: #ff7c14 1px solid; POSITION: absolute } /* Rule N°53 from inline stylesheet */ A.tooltip SPAN { PADDING-RIGHT: 3px; DISPLAY: none; PADDING-LEFT: 3px; FONT-WEIGHT: normal; FONT-SIZE: 11px; PADDING-BOTTOM: 2px; MARGIN-LEFT: -245px; WIDTH: 230px; PADDING-TOP: 2px } </style>
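
    A hedged guess, since the IE6 behaviour cannot be verified from here: IE6 is often reported to ignore descendant rules such as a.tooltip:hover span unless the a:hover rule also declares something on the anchor itself. The trimmed rules below keep the structure above but add a harmless declaration to the anchor's :hover and position the span with top/left relative to the anchor instead of a negative margin (the float, size and icon background from the original rules are omitted for brevity):

        a.tooltip { position: relative; }
        a.tooltip:hover { background-position: 0 0; }  /* no visual change; nudges IE6 into applying the hover state */
        a.tooltip span { display: none; }
        a.tooltip:hover span {
            display: inline;
            position: absolute;
            top: 20px;
            left: -245px;
            width: 230px;
            background: #fff;
            border: 1px solid #ff7c14;
            color: #000;
        }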

    Read the article

  • Web SITE publishing, dynamic compilation, smoke & mirrors

    - by tbehunin
    When you publish a web SITE in Visual Studio, in the dialog box that follows, you are given an option to "Allow this precompiled site to be updatable". According to MSDN, checking this option "specifies that all program code is compiled into assemblies, but that .aspx files (including single-file ASP.NET Web pages) are copied as-is to the target folder". With this option checked, you can update existing .aspx files as well as add new ones without any issue. When a page, that has either been updated or newly created, is requested, the page gets dynamically compiled at run-time and is then processed and returned to the user. If, on the other hand, you didn't check that checkbox during the publish phase, the .aspx files get compiled, along with the code-behind and App_Code files in separate assemblies. The .aspx files are then completely overwritten with a line of text that says: This is a marker file generated by the precompilation tool, and should not be deleted! You obviously can't edit an existing page in this scenario. If you were to ADD a new .aspx file to this site, you would get a .Net run-time error saying that the file hasn't been precompiled. With that background, my questions are these: Something must be able to determine that this website was published to be updatable (allow dynamic compilation) or not. If it was published as updatable, it must also be able to determine whether a file was changed or added, so it can do a dynamic compile. Who makes those determinations? IIS? ASP.NET worker process? HOW does it make those determinations? If I had the same website published in both of those scenarios, could I make a visual determination that one is updatable and the other is not? Is there some bit I can look at in the assemblies using Reflector to make that determination myself? In addition to answering those questions, what also might be helpful would be information on the process flow from when a resource is requested to when it starts being processed, not necessarily the ASP.NET Page Lifecycle, but what happens BEFORE ASP.Net worker process starts processing the page and firing off events. The dynamic compilation appears to be smoke and mirrors. Can someone demystify this for me?
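
    On the "who makes those determinations" question, a concrete pointer: the publish step writes a marker file named PrecompiledApp.config into the root of the target folder, and the ASP.NET runtime reads it at application startup to decide whether dynamic compilation of .aspx files is still allowed. That also gives the visual check asked about - the two deployments differ in this one file rather than in the compiled assemblies:

        <!-- PrecompiledApp.config in the root of the published site -->
        <precompiledApp version="2" updatable="true" />

    With updatable="false" the .aspx files are the placeholder markers described above and a newly added .aspx file produces the "has not been precompiled" error; with updatable="true" the runtime falls back to its normal dynamic compilation pipeline for changed or added pages.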

    Read the article

  • Setting up Netbeans/Eclipse for Linux Kernel Development

    - by red.october
    Hi: I'm doing some Linux kernel development, and I'm trying to use Netbeans. Despite declared support for Make-based C projects, I cannot create a fully functional Netbeans project. This is despite compiling having Netbeans analyze a kernel binary that was compiled with full debugging information. Problems include: files are wrongly excluded: Some files are incorrectly greyed out in the project, which means Netbeans does not believe they should be included in the project, when in fact they are compiled into the kernel. The main problem is that Netbeans will miss any definitions that exist in these files, such as data structures and functions, but also miss macro definitions. cannot find definitions: Pretty self-explanatory - often times, Netbeans cannot find the definition of something. This is partly a result of the above problem. can't find header files: self-explanatory I'm wondering if anyone has had success with setting up Netbeans for Linux kernel development, and if so, what settings they used. Ultimately, I'm looking for Netbeans to be able to either parse the Makefile (preferred) or extract the debug information from the binary (less desirable, since this can significantly slow down compilation), and automatically determine which files are actually compiled and which macros are actually defined. Then, based on this, I would like to be able to find the definitions of any data structure, variable, function, etc. and have complete auto-completion. Let me preface this question with some points: I'm not interested in solutions involving Vim/Emacs. I know some people like them, but I'm not one of them. As the title suggest, I would be also happy to know how to set-up Eclipse to do what I need While I would prefer perfect coverage, something that only misses one in a million definitions is obviously fine SO's useful "Related Questions" feature has informed me that the following question is related: http://stackoverflow.com/questions/149321/what-ide-would-be-good-for-linux-kernel-driver-development. Upon reading it, the question is more of a comparison between IDE's, whereas I'm looking for how to set-up a particular IDE. Even so, the user Wade Mealing seems to have some expertise in working with Eclipse on this kind of development, so I would certainly appreciate his (and of course all of your) answers. Cheers

    Read the article

  • Why did File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images into a different location while querying each file's name to retrieve some relevant info and store it elsewhere. The script that used File::Find finished at 20462 files. Out of curiosity I wrote a tiny recursive function to count the items which returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test. The problem itself can be solved differently by querying for file names first instead of traversing the directory. I'm just wondering what could've caused File::Find to finish at a small fraction of all files. The data is stored on an NTFS file system. Here is the meat of the script; I don't think including DBI stuff would be relevant since I reran the script with nothing but a counter in process_img() which returned the same number. find(\&process_img, $path_from); sub process_img { eval { return if ($_ eq "." or $_ eq ".."); ## Omitted querying and composing new paths for brevity. make_path("$path_to\\img\\$dir_area\\$dir_address\\$type"); copy($File::Find::name, "$path_to\\img\\$dir_area\\$dir_address\\$type\\$new_name"); }; if ($@) { print STDERR "eval barks: $@\n"; return } } And here is another method I used to count files: count_images($path_from); sub count_images { my $path = shift; opendir my $images, $path or die "died opening $path"; while (my $item = readdir $images) { next if $item eq '.' or $item eq '..'; $img_counter++ && next if -f "$path/$item"; count_images("$path/$item") if -d "$path/$item"; } closedir $images or die "died closing $path"; } print $img_counter;

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following is going to result in better performance for a page which loads a large amount of javascript (jQuery + jQuery UI + various other javascript files). I have gone through most of the YSlow and Google Page Speed stuff, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business to business platform where almost all users are repeat visitors (and therefore with caches of the data, which is something that YSlow assumes will not be the case for a large number of visitors). First up, the standard approach recommended by tools such as YSlow is to concatenate it, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think that a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this * All javascript files are compressed and loaded at the bottom of the page * All javascript files have far future cache expiration dates, so will remain (for most users) in the cache for a long time * Pages only load the javascript files that they require, rather than loading one monolithic file, most of which will not be required Now, my understanding is that, if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; there is no HTTP request sent at to the server at all. If this is correct, I would assume that having multiple tags is not causing any performance penalty, as I'm still not having any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the JS means that the browser doesn't have to interpret or execute all this additional code which it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached. So, what do people think is a better approach? In a similar vein, what do you think about a similar approach to CSS - is monolithic better?
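
    For reference, the LABjs approach mentioned at the end usually ends up looking like the sketch below (script paths are placeholders): the files stay separate so the far-future cache headers keep doing their work, the fetches happen in parallel, and execution is only serialised where order actually matters.

        <script src="/js/LAB.js"></script>
        <script>
        $LAB
            .script("/js/jquery.js").wait()      // everything below depends on jQuery
            .script("/js/jquery-ui.js")
            .script("/js/page-specific.js")
            .wait(function () {
                // all scripts fetched (from cache on repeat visits) and executed in order
            });
        </script>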

    Read the article

  • DataGridView cells not editable when using an outside thread call

    - by joslinm
    Hi, I'm not able to edit my datagridview cells when a number of identical calls takes place on another thread. Here's the situation: Dataset table is created in the main window The program receives in files and processes them on a background thread in class TorrentBuilder : BackgroundWorker creating an array objects of another class Torrent My program receives those objects from the BW result and adds them into the dataset The above happens either on my main window thread or in another thread: I have a separate thread watching a folder for files to come in, and when they do come in, they proceed to call TorrentBuilder.RunWorkerAsynch() from that thread, receive the result, and call an outside class that adds the Torrent objects into the table. When the files are received by the latter thread, the datagridview isn't editable. All of the values come up properly into the datagridview, but when I click on a cell to edit it: I can write letters and everything, but when I click out of it, it immediately reverts back to its original value. If I restart the program, I can edit the same cells just fine. If the values are freshly added from the main window thread, I can edit the cells just fine. The outside thread is called from my main window thread, and sits there in the background. I don't believe it to be ReadOnly because I would have gotten an exception. Here's some code: From my main window class: private void dataGridView_DragDrop(object sender, DragEventArgs e) { ArrayList al = new ArrayList(); string[] files = (string[])e.Data.GetData(DataFormats.FileDrop); foreach (string file in files) { string extension = Path.GetExtension(file); if (Path.GetExtension(file).Equals(".zip") || Path.GetExtension(file).Equals(".rar")) { foreach (string unzipped in dh.UnzipFile(file)) al.Add(unzipped); } else if (Path.GetExtension(file).Equals(".torrent")) { al.Add(file); } } dataGridViewProgressBar.Visible = true; tb.RunWorkerCompleted += new RunWorkerCompletedEventHandler(tb_DragDropCompleted); tb.ProgressChanged += new ProgressChangedEventHandler(tb_DragDropProgress); tb.RunWorkerAsync() } void tb_DragDropCompleted(object sender, RunWorkerCompletedEventArgs e) { data.AddTorrents((Torrent[])e.Result); builder.Dispose(); dh.MoveProcessedFiles(data); dataGridViewProgressBar.Visible = false; } From my outside Thread while (autocheck) { if (torrentFiles != null) { builder.RunWorkerAsync(torrentFiles); while (builder.IsBusy) Thread.Sleep(500); } } void builder_RunWorkerCompleted(object sender, System.ComponentModel.RunWorkerCompletedEventArgs e) { data.AddTorrents((Torrent[])e.Result); builder.Dispose(); dh.MoveProcessedFiles(xml); data.Save(); //Save just does an `AcceptChanges()` and saves to a XML file }
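
    A hedged reading of the symptom: BackgroundWorker raises RunWorkerCompleted on the context that called RunWorkerAsync, so for the worker started from the folder-watching thread the completion handler - and therefore data.AddTorrents() - runs off the UI thread, and a DataGridView bound to a DataTable that was last modified from another thread is known to misbehave in exactly this "edits silently revert" way. Marshalling the table update back onto the UI thread is the usual fix; the sketch below assumes the handler lives on (or holds a reference to) the main form, and AddTorrentsAndSave is a new helper name.

        void builder_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            var torrents = (Torrent[])e.Result;

            if (InvokeRequired)
            {
                // Hop back onto the UI thread before touching the bound DataTable.
                BeginInvoke(new Action(() => AddTorrentsAndSave(torrents)));
            }
            else
            {
                AddTorrentsAndSave(torrents);
            }
        }

        void AddTorrentsAndSave(Torrent[] torrents)
        {
            data.AddTorrents(torrents);   // now running on the UI thread
            dh.MoveProcessedFiles(xml);
            data.Save();
        }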

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enables you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements – without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code for this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to set up, this approach to executing JavaScript unit tests has several big disadvantages: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test – you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwarm – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Window Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx Creating the JavaScript Code to Test To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together: MvcApplication1\Scripts\Math.js function addNumbers(a, b) { return 5; } Notice that the addNumbers() method always returns the value 5. Right-now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (Save the file in your actual MVC application and not your MVC test application). Creating the JavaScript Test Helper Class To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods: LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests. ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test. Here’s the code for the JavaScriptTestHelper class: JavaScriptTestHelper.cs   using System; using System.IO; using Microsoft.VisualStudio.TestTools.UnitTesting; using MSScriptControl; namespace MvcApplication1.Tests { public class JavaScriptTestHelper : IDisposable { private ScriptControl _sc; private TestContext _context; /// <summary> /// You need to use this helper with Unit Tests and not /// Basic Unit Tests because you need a Test Context /// </summary> /// <param name="testContext">Unit Test Test Context</param> public JavaScriptTestHelper(TestContext testContext) { if (testContext == null) { throw new ArgumentNullException("TestContext"); } _context = testContext; _sc = new ScriptControl(); _sc.Language = "JScript"; _sc.AllowUI = false; } /// <summary> /// Load the contents of a JavaScript file into the /// Script Engine. /// </summary> /// <param name="path">Path to JavaScript file</param> public void LoadFile(string path) { var fileContents = File.ReadAllText(path); _sc.AddCode(fileContents); } /// <summary> /// Pass the path of the test that you want to execute. 
/// </summary> /// <param name="testMethodName">JavaScript function name</param> public void ExecuteTest(string testMethodName) { dynamic result = null; try { result = _sc.Run(testMethodName, new object[] { }); } catch { var error = ((IScriptControl)_sc).Error; if (error != null) { var description = error.Description; var line = error.Line; var column = error.Column; var text = error.Text; var source = error.Source; if (_context != null) { var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column); _context.WriteLine(details); } } throw new AssertFailedException(error.Description); } } public void Dispose() { _sc = null; } } } Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (these are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests). Creating the JavaScript Unit Test Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder: MvcApplication1.Tests\JavaScriptTests\MathTest.js /// <reference path="JavaScriptUnitTestFramework.js"/> function testAddNumbers() { // Act var result = addNumbers(1, 3); // Assert assert.areEqual(4, result, "addNumbers did not return right value!"); } The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js to the same folder as the MathTest.js file: MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js var assert = { areEqual: function (expected, actual, message) { if (expected !== actual) { throw new Error("Expected value " + expected + " is not equal to " + actual + ". " + message); } } }; There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests. Deploying the JavaScript Test Files This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests. If the files are not deployed, you will get an error when you attempt to execute your unit tests. You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. 
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTests folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog. Creating the Visual Studio Unit Test The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (do not select the Basic Unit Test project item). The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test: [TestMethod] public void TestAddNumbers() { var jsHelper = new JavaScriptTestHelper(this.TestContext); // Load JavaScript files jsHelper.LoadFile("JavaScriptUnitTestFramework.js"); jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js"); jsHelper.LoadFile("MathTest.js"); // Execute JavaScript Test jsHelper.ExecuteTest("testAddNumbers"); } This code uses the JavaScriptTestHelper to load three files: JavaScriptUnitTestFramework.js – Contains the assert functions. Math.js – Contains the addNumbers() function from your MVC application which is being tested. MathTest.js – Contains the JavaScript unit test function. Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function. Running the Visual Studio JavaScript Unit Test After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests. (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.) The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution, you will see that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed. Summary: The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window. 
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac
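    One small variation worth noting (not part of the article's download, just a sketch built on the same JavaScriptTestHelper class and file paths used above): once you have several JavaScript test methods, the repeated LoadFile() calls can be pulled into a [TestInitialize] method so that each test method is reduced to a single ExecuteTest() call.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    namespace MvcApplication1.Tests
    {
        [TestClass]
        public class MathJavaScriptTests
        {
            // MSTest populates this property before [TestInitialize] runs.
            public TestContext TestContext { get; set; }

            private JavaScriptTestHelper _jsHelper;

            [TestInitialize]
            public void InitializeScriptEngine()
            {
                // Same files the article's TestAddNumbers() method loads by hand.
                _jsHelper = new JavaScriptTestHelper(TestContext);
                _jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
                _jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
                _jsHelper.LoadFile("MathTest.js");
            }

            [TestCleanup]
            public void CleanupScriptEngine()
            {
                _jsHelper.Dispose();
            }

            [TestMethod]
            public void TestAddNumbers()
            {
                // Each additional JavaScript test becomes a one-line test method.
                _jsHelper.ExecuteTest("testAddNumbers");
            }
        }
    }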

    Read the article

  • Unable to boot Windows 7 after installing Ubuntu

    - by Devendra
    I have Windows 7 on my machine and then installed Ubuntu 12.04 using a live CD. I can see both Windows 7 and Ubuntu in the grub menu, but when I select Windows 7 it shows a black screen for about 2 seconds and the returns to the Grub menu. But if I select Ubuntu it's working fine. This is the contents of the boot-repair log: Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info November 20th 2012] ============================= Boot Info Summary: =============================== => Grub2 (v2.00) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for (,msdos6)/boot/grub. sda1: __________________________________________________________________________ File system: ntfs Boot sector type: Grub2 (v1.99-2.00) Boot sector info: Grub2 (v2.00) is installed in the boot sector of sda1 and looks at sector 388911128 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for (,msdos6)/boot/grub. No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda2: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda3: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda4: __________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: According to the info in the boot sector, sda5 starts at sector 2048. 
Operating System: Boot files: sda6: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.10 Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/i386-pc/core.img sda7: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 206,848 146,802,687 146,595,840 7 NTFS / exFAT / HPFS /dev/sda2 147,007,488 293,623,807 146,616,320 7 NTFS / exFAT / HPFS /dev/sda3 293,623,808 332,820,613 39,196,806 7 NTFS / exFAT / HPFS /dev/sda4 332,822,526 1,465,145,343 1,132,322,818 f W95 Extended (LBA) /dev/sda5 461,342,720 1,465,145,343 1,003,802,624 7 NTFS / exFAT / HPFS /dev/sda6 332,822,528 453,171,199 120,348,672 83 Linux /dev/sda7 453,173,248 461,338,623 8,165,376 82 Linux swap / Solaris "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/sda1 F6AE2C13AE2BCB47 ntfs /dev/sda2 DC2273012272DFC6 ntfs /dev/sda3 1E76E43376E40D79 ntfs New Volume /dev/sda5 5ED60ACDD60AA57D ntfs /dev/sda6 9e70fd16-b48b-4f88-adcf-e443aef83124 ext4 /dev/sda7 52f3dd94-6be7-4a7b-a3ae-f43eb8810483 swap ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sda6 / ext4 (rw,errors=remount-ro) =========================== sda6/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi export menuentry_id_option if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi } if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi font="/usr/share/grub/unicode.pf2" fi if loadfont $font ; then set gfxmode=auto load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_IN insmod gettext fi 
terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=10 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9e70fd16-b48b-4f88-adcf-e443aef83124' { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash $vt_handoff initrd /boot/initrd.img-3.5.0-17-generic } submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-9e70fd16-b48b-4f88-adcf-e443aef83124' { menuentry 'Ubuntu, with Linux 3.5.0-17-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-17-generic-advanced-9e70fd16-b48b-4f88-adcf-e443aef83124' { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi echo 'Loading Linux 3.5.0-17-generic ...' linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.5.0-17-generic } menuentry 'Ubuntu, with Linux 3.5.0-17-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-17-generic-recovery-9e70fd16-b48b-4f88-adcf-e443aef83124' { recordfail insmod gzio insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi echo 'Loading Linux 3.5.0-17-generic ...' linux /boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.5.0-17-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='hd0,msdos6' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 9e70fd16-b48b-4f88-adcf-e443aef83124 else search --no-floppy --fs-uuid --set=root 9e70fd16-b48b-4f88-adcf-e443aef83124 fi linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry 'Windows 7 (loader) (on /dev/sda1)' --class windows --class os $menuentry_id_option 'osprober-chain-F6AE2C13AE2BCB47' { insmod part_msdos insmod ntfs set root='hd0,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 F6AE2C13AE2BCB47 else search --no-floppy --fs-uuid --set=root F6AE2C13AE2BCB47 fi chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/30_uefi-firmware ### ### END /etc/grub.d/30_uefi-firmware ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda6/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sda6 during installation UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda7 during installation UUID=52f3dd94-6be7-4a7b-a3ae-f43eb8810483 none swap sw 0 0 -------------------------------------------------------------------------------- =================== sda6: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 162.831275940 = 174.838751232 boot/grub/grub.cfg 1 163.036647797 = 175.059267584 boot/initrd.img-3.5.0-17-generic 1 206.871749878 = 222.126850048 boot/vmlinuz-3.5.0-17-generic 1 163.036647797 = 175.059267584 initrd.img 1 163.036647797 = 175.059267584 initrd.img.old 1 206.871749878 = 222.126850048 vmlinuz 1 =============================== StdErr Messages: =============================== cat: write error: Broken pipe cat: write error: Broken pipe ADDITIONAL INFORMATION : =================== log of boot-repair 2012-12-11__00h59 =================== boot-repair version : 3.195~ppa28~quantal boot-sav version : 3.195~ppa28~quantal glade2script version : 3.2.2~ppa45~quantal boot-sav-extra version : 3.195~ppa28~quantal boot-repair is executed in installed-session (Ubuntu 12.10, quantal, Ubuntu, x86_64) CPU op-mode(s): 32-bit, 64-bit BOOT_IMAGE=/boot/vmlinuz-3.5.0-17-generic root=UUID=9e70fd16-b48b-4f88-adcf-e443aef83124 ro quiet splash vt.handoff=7 =================== os-prober: /dev/sda6:The OS now in use - Ubuntu 12.10 CurrentSession:linux /dev/sda1:Windows 7 (loader):Windows:chain =================== blkid: /dev/sda1: UUID="F6AE2C13AE2BCB47" TYPE="ntfs" /dev/sda2: UUID="DC2273012272DFC6" TYPE="ntfs" /dev/sda3: LABEL="New Volume" UUID="1E76E43376E40D79" TYPE="ntfs" /dev/sda5: UUID="5ED60ACDD60AA57D" TYPE="ntfs" /dev/sda6: UUID="9e70fd16-b48b-4f88-adcf-e443aef83124" TYPE="ext4" /dev/sda7: UUID="52f3dd94-6be7-4a7b-a3ae-f43eb8810483" TYPE="swap" 1 disks with OS, 2 OS : 1 Linux, 0 MacOS, 1 Windows, 0 unknown type OS. Warning: extended partition does not start at a cylinder boundary. DOS and Linux will interpret the contents differently. =================== /etc/default/grub : # If you change this file, run 'update-grub' afterwards to update # /boot/grub/grub.cfg. # For full documentation of the options in this file, see: # info -f grub -n 'Simple configuration' GRUB_DEFAULT=0 #GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" GRUB_CMDLINE_LINUX="" # Uncomment to enable BadRAM filtering, modify to suit your needs # This works with Linux (no patch required) and with any kernel that obtains # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...) 
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef" # Uncomment to disable graphical terminal (grub-pc only) #GRUB_TERMINAL=console # The resolution used on graphical terminal # note that you can use only modes which your graphic card supports via VBE # you can see them in real GRUB with the command `vbeinfo' #GRUB_GFXMODE=640x480 # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux #GRUB_DISABLE_LINUX_UUID=true # Uncomment to disable generation of recovery mode menu entries #GRUB_DISABLE_RECOVERY="true" # Uncomment to get a beep at grub start #GRUB_INIT_TUNE="480 440 1" =================== /etc/grub.d/ : drwxr-xr-x 2 root root 4096 Oct 17 20:29 grub.d total 72 -rwxr-xr-x 1 root root 7541 Oct 14 23:06 00_header -rwxr-xr-x 1 root root 5488 Oct 4 15:00 05_debian_theme -rwxr-xr-x 1 root root 10891 Oct 14 23:06 10_linux -rwxr-xr-x 1 root root 10258 Oct 14 23:06 20_linux_xen -rwxr-xr-x 1 root root 1688 Oct 11 19:40 20_memtest86+ -rwxr-xr-x 1 root root 10976 Oct 14 23:06 30_os-prober -rwxr-xr-x 1 root root 1426 Oct 14 23:06 30_uefi-firmware -rwxr-xr-x 1 root root 214 Oct 14 23:06 40_custom -rwxr-xr-x 1 root root 216 Oct 14 23:06 41_custom -rw-r--r-- 1 root root 483 Oct 14 23:06 README =================== UEFI/Legacy mode: This installed-session is not in EFI-mode. EFI in dmesg. Please report this message to [email protected] [ 0.000000] ACPI: UEFI 00000000bafe7000 0003E (v01 DELL QA09 00000002 PTL 00000002) [ 0.000000] ACPI: UEFI 00000000bafe6000 00042 (v01 PTL COMBUF 00000001 PTL 00000001) [ 0.000000] ACPI: UEFI 00000000bafe3000 00256 (v01 DELL QA09 00000002 PTL 00000002) SecureBoot disabled. =================== PARTITIONS & DISKS: sda6 : sda, not-sepboot, grubenv-ok grub2, grub-pc , update-grub, 64, with-boot, is-os, not--efi--part, fstab-without-boot, fstab-without-efi, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, apt-get, grub-install, with--usr, fstab-without-usr, not-sep-usr, standard, farbios, . sda1 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, is-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, haswinload, no-recov-nor-hid, bootmgr, is-winboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, not-far, /mnt/boot-sav/sda1. sda2 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda2. sda3 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda3. sda5 : sda, not-sepboot, no-grubenv nogrub, no-docgrub, no-update-grub, 32, no-boot, no-os, not--efi--part, part-has-no-fstab, part-has-no-fstab, no-nt, no-winload, no-recov-nor-hid, no-bmgr, notwinboot, nopakmgr, nogrubinstall, no---usr, part-has-no-fstab, not-sep-usr, standard, farbios, /mnt/boot-sav/sda5. 
sda : not-GPT, BIOSboot-not-needed, has-no-EFIpart, not-usb, has-os, 2048 sectors * 512 bytes =================== parted -l: Model: ATA WDC WD7500BPKT-7 (scsi) Disk /dev/sda: 750GB Sector size (logical/physical): 512B/4096B Partition Table: msdos Number Start End Size Type File system Flags 1 106MB 75.2GB 75.1GB primary ntfs boot 2 75.3GB 150GB 75.1GB primary ntfs 3 150GB 170GB 20.1GB primary ntfs 4 170GB 750GB 580GB extended lba 6 170GB 232GB 61.6GB logical ext4 7 232GB 236GB 4181MB logical linux-swap(v1) 5 236GB 750GB 514GB logical ntfs =================== parted -lm: BYT; /dev/sda:750GB:scsi:512:4096:msdos:ATA WDC WD7500BPKT-7; 1:106MB:75.2GB:75.1GB:ntfs::boot; 2:75.3GB:150GB:75.1GB:ntfs::; 3:150GB:170GB:20.1GB:ntfs::; 4:170GB:750GB:580GB:::lba; 6:170GB:232GB:61.6GB:ext4::; 7:232GB:236GB:4181MB:linux-swap(v1)::; 5:236GB:750GB:514GB:ntfs::; =================== mount: /dev/sda6 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755) gvfsd-fuse on /run/user/dev/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=dev) /dev/sda1 on /mnt/boot-sav/sda1 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda2 on /mnt/boot-sav/sda2 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda3 on /mnt/boot-sav/sda3 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) /dev/sda5 on /mnt/boot-sav/sda5 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096) =================== ls: /sys/block/sda (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro sda1 sda2 sda3 sda4 sda5 sda6 sda7 size slaves stat subsystem trace uevent /sys/block/sr0 (filtered): alignment_offset bdi capability dev device discard_alignment events events_async events_poll_msecs ext_range holders inflight power queue range removable ro size slaves stat subsystem trace uevent /dev (filtered): alarm ashmem autofs binder block bsg btrfs-control bus cdrom cdrw char console core cpu cpu_dma_latency disk dri dvd dvdrw ecryptfs fb0 fb1 fd full fuse hpet input kmsg kvm log mapper mcelog mei mem net network_latency network_throughput null oldmem port ppp psaux ptmx pts random rfkill rtc rtc0 sda sda1 sda2 sda3 sda4 sda5 sda6 sda7 sg0 sg1 shm snapshot snd sr0 stderr stdin stdout uinput urandom v4l vga_arbiter vhost-net video0 zero ls /dev/mapper: control =================== df -Th: Filesystem Type Size Used Avail Use% Mounted on /dev/sda6 ext4 57G 2.7G 51G 6% / udev devtmpfs 1.9G 12K 1.9G 1% /dev tmpfs tmpfs 770M 892K 769M 1% /run none tmpfs 5.0M 0 5.0M 0% /run/lock none tmpfs 1.9G 260K 1.9G 1% /run/shm none tmpfs 100M 44K 100M 1% /run/user /dev/sda1 fuseblk 70G 36G 35G 51% /mnt/boot-sav/sda1 /dev/sda2 fuseblk 70G 66G 4.8G 94% /mnt/boot-sav/sda2 /dev/sda3 fuseblk 19G 87M 19G 1% /mnt/boot-sav/sda3 /dev/sda5 fuseblk 479G 436G 44G 92% /mnt/boot-sav/sda5 =================== fdisk -l: Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 
sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x1dc69d0b Device Boot Start End Blocks Id System /dev/sda1 * 206848 146802687 73297920 7 HPFS/NTFS/exFAT /dev/sda2 147007488 293623807 73308160 7 HPFS/NTFS/exFAT /dev/sda3 293623808 332820613 19598403 7 HPFS/NTFS/exFAT /dev/sda4 332822526 1465145343 566161409 f W95 Ext'd (LBA) Partition 4 does not start on physical sector boundary. /dev/sda5 461342720 1465145343 501901312 7 HPFS/NTFS/exFAT /dev/sda6 332822528 453171199 60174336 83 Linux /dev/sda7 453173248 461338623 4082688 82 Linux swap / Solaris Partition table entries are not in disk order =================== Recommended repair Recommended-Repair This setting will reinstall the grub2 of sda6 into the MBR of sda. Additional repair will be performed: unhide-bootmenu-10s grub-install (GRUB) 2.00-7ubuntu11,grub-install (GRUB) 2. Reinstall the GRUB of sda6 into the MBR of sda Installation finished. No error reported. grub-install /dev/sda: exit code of grub-install /dev/sda:0 update-grub Generating grub.cfg ... Found linux image: /boot/vmlinuz-3.5.0-17-generic Found initrd image: /boot/initrd.img-3.5.0-17-generic Found memtest86+ image: /boot/memtest86+.bin Found Windows 7 (loader) on /dev/sda1 Unhide GRUB boot menu in sda6/boot/grub/grub.cfg Boot successfully repaired. You can now reboot your computer. The boot files of [The OS now in use - Ubuntu 12.10] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, >200MB, start of the disk). This can be performed via tools such as gParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair]. (https://help.ubuntu.com/community/BootPartition)

    Read the article

  • Failed to install GNOME3 with 404 error

    - by Neon
    I followed the instructions here; however, I got a 404 Not Found error when checking for updates: W:Failed to fetch http://ppa.launchpad.net/ubuntugnometeam/gnome3/ubuntu/dists/natty/main/source/Sources 404 Not Found, W:Failed to fetch http://ppa.launchpad.net/ubuntugnometeam/gnome3/ubuntu/dists/natty/main/binary-i386/Packages 404 Not Found, E:Some index files failed to download. They have been ignored, or old ones used instead. Thus I couldn't even start the installation. Does anybody know how to perform a full installation?

    Read the article

  • An XEvent a Day (6 of 31) – Targets Week – asynchronous_file_target

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week – ring_buffer, looked at the ring_buffer Target in Extended Events and how it outputs the raw Event data in an XML document. Today I’m going to go over the details of the other Target in Extended Events that captures raw Event data, the asynchronous_file_target. What is the asynchronous_file_target? The asynchronous_file_target holds the raw format Event data in a proprietary binary file format that persists beyond server restarts and can be provided to another...(read more)

    Read the article

  • remove duplicate source entry [closed]

    - by yosa
    Possible Duplicate: Duplicate sources.list entry but cannot find the duplicates? This is my source.list and seems fine to me # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/ # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/ # deb cdrom:[Ubuntu 11.10]/ natty main restricted # deb cdrom:[Ubuntu 11.04 _Natty Narwhal_ - Release i386 (20110427.1)]/ natty main restricted # deb cdrom:[Ubuntu 11.10 _Oneiric Ocelot_ - Release amd64 (20111012)]/ dists/oneiric/main/binary-i386/ # deb cdrom:[Ubuntu 11.10 _Oneiric Ocelot_ - Release amd64 (20111012)]/ oneiric main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://archive.ubuntu.com/ubuntu precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://archive.ubuntu.com/ubuntu precise universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://archive.ubuntu.com/ubuntu precise multiverse ## Uncomment the following two lines to add software from the 'backports' ## repository. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. # deb-src http://ma.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu natty partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu precise main deb http://archive.ubuntu.com/ubuntu precise-updates restricted main multiverse universe deb http://security.ubuntu.com/ubuntu/ precise-security restricted main multiverse universe deb http://archive.ubuntu.com/ubuntu precise main universe deb-src http://extras.ubuntu.com/ubuntu precise main # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb-src http://archive.ubuntu.com/ubuntu precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://archive.ubuntu.com/ubuntu precise-updates restricted deb-src http://archive.ubuntu.com/ubuntu precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. 
Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb-src http://archive.ubuntu.com/ubuntu precise universe deb-src http://archive.ubuntu.com/ubuntu precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb-src http://archive.ubuntu.com/ubuntu precise multiverse deb-src http://archive.ubuntu.com/ubuntu precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse deb-src http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse deb http://archive.ubuntu.com/ubuntu precise-security main restricted deb-src http://archive.ubuntu.com/ubuntu precise-security main restricted deb http://archive.ubuntu.com/ubuntu precise-security universe deb-src http://archive.ubuntu.com/ubuntu precise-security universe deb http://archive.ubuntu.com/ubuntu precise-security multiverse deb-src http://archive.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. # deb http://archive.canonical.com/ubuntu oneiric partner # deb-src http://archive.canonical.com/ubuntu oneiric partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. ## Major bug fix updates produced after the final release of the ## distribution. ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. 
# deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu precise partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. # deb http://packages.dotdeb.org stable all # deb-src http://packages.dotdeb.org stable all # deb http://ppa.launchpad.net/bean123ch/burg/ubuntu lucid main # deb-src http://ppa.launchpad.net/bean123ch/burg/ubuntu lucid main this is the error given by apt-get update which stops at 64% reading W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/main amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_main_binary-amd64_Packages) W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/universe amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-amd64_Packages) W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages) W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/universe i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-i386_Packages) W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise-updates/restricted amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise-updates_restricted_binary-amd64_Packages) W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise-updates/restricted i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise-updates_restricted_binary-i386_Packages)

    Read the article

  • System.Data.SQLite

    - by csharp-source.net
    System.Data.SQLite is an enhanced version of the original SQLite database engine. It is a complete drop-in replacement for the original sqlite3.dll (you can even rename it to sqlite3.dll). It has no linker dependency on the .NET runtime so it can be distributed independently of .NET, yet embedded in the binary is a complete ADO.NET 2.0 provider for full managed development.
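    As a rough illustration of the bundled ADO.NET provider described above (the connection string, table and data here are invented for the example, and the project is assumed to reference the System.Data.SQLite assembly):

    using System;
    using System.Data.SQLite;

    class SQLiteDemo
    {
        static void Main()
        {
            // "demo.db" is created on first use if it does not already exist.
            using (var connection = new SQLiteConnection("Data Source=demo.db"))
            {
                connection.Open();

                using (var create = new SQLiteCommand(
                    "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)", connection))
                {
                    create.ExecuteNonQuery();
                }

                using (var insert = new SQLiteCommand("INSERT INTO notes (body) VALUES ('hello')", connection))
                {
                    insert.ExecuteNonQuery();
                }

                using (var count = new SQLiteCommand("SELECT COUNT(*) FROM notes", connection))
                {
                    Console.WriteLine("rows: {0}", count.ExecuteScalar());
                }
            }
        }
    }

    Since the native engine and the managed provider live in the same binary, that single reference is all the project needs.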

    Read the article

< Previous Page | 466 467 468 469 470 471 472 473 474 475 476 477  | Next Page >