Search Results

Search found 39200 results on 1568 pages for 'zip files'.


  • Mac OS X software always orders files alphabetically rather than by type

    - by george
    I have noticed that many Mac applications sort files alphabetically rather than by type. A good example is Coda by panic.com: the files in the file menu are organized alphabetically. I requested that they add a feature to organize files by type, and they said it's a Finder thing. So I went looking at other applications to see whether they organize by type, and I noticed Dreamweaver CS4 had the same problem, and now Dreamweaver CS5 does as well. There has to be something in the Mac that does this, and that I can modify. I played with Spotlight, and it now displays its files by type (thinking that's what I could do), but it didn't take effect in other applications. What library are these applications using to display a file menu for their files?

    Read the article

  • How to edit the list of files to download in a .torrent file?

    - by Forbidden Overseer
    I have a torrent that contains many files, but I don't want to download all of them because I need only a few of them. I am not downloading with a torrent client; I am actually using the put.io service, which converts torrents into direct downloads for me. I want to edit the .torrent file (or magnet link) in such a way that I get only those files I need from the torrent. I tried editing it with text editors, but I got strange hash errors when I did that. I have seen people edit the trackers in a torrent file (using online editors and other tools), but none of them edited the list of files in the torrent. So, the question is: how do you edit a .torrent file (and save it back as another .torrent file) such that when you run it, it downloads only the files you need?
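
    For context on why the text editors failed: a .torrent file is bencoded binary data, and the file list lives inside the info dictionary, whose SHA-1 hash is the torrent's identity; rewriting that list therefore produces hash errors (effectively a different torrent). A minimal sketch, in Python, of reading the file list with a hand-rolled bencode decoder (the .torrent file name is a placeholder):

        def bdecode(data, i=0):
            """Decode the bencoded value starting at offset i; return (value, next_i)."""
            c = data[i:i+1]
            if c == b'i':                               # integer: i<digits>e
                end = data.index(b'e', i)
                return int(data[i+1:end]), end + 1
            if c == b'l':                               # list: l<items>e
                i, items = i + 1, []
                while data[i:i+1] != b'e':
                    item, i = bdecode(data, i)
                    items.append(item)
                return items, i + 1
            if c == b'd':                               # dict: d<key><value>...e
                i, d = i + 1, {}
                while data[i:i+1] != b'e':
                    key, i = bdecode(data, i)
                    d[key], i = bdecode(data, i)
                return d, i + 1
            colon = data.index(b':', i)                 # byte string: <length>:<bytes>
            length = int(data[i:colon])
            start = colon + 1
            return data[start:start + length], start + length

        with open('example.torrent', 'rb') as f:        # placeholder file name
            meta, _ = bdecode(f.read())

        info = meta[b'info']
        if b'files' in info:                            # multi-file torrent
            for entry in info[b'files']:
                name = b'/'.join(entry[b'path']).decode('utf-8', 'replace')
                print(entry[b'length'], name)
        else:                                           # single-file torrent
            print(info[b'length'], info[b'name'].decode('utf-8', 'replace'))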

    Read the article

  • How can I add metadata to NTFS files/folders?

    - by Pwdr
    I want to tag different file types (e.g. .pdf, .epub, .iso, .bin, folders, ...) using the same descriptive fields. For example, I would like a metadata field "type" that would be "eBook" on PDF and EPUB files and "CD-Image" on ISO and BIN files. I read about Alternate Data Streams (ADS) as a way to make this possible. Does anyone know a good program for Windows 7 to tag different files like this and search for them? It is important to me that the metadata is NOT stored in a separate database. I move the files a lot and need to stay flexible (ADSs 'stick' to the files). Any ideas?
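
    For what it's worth, NTFS alternate data streams can be read and written with ordinary file APIs by appending :streamname to the path, so a tagging scheme like this can be prototyped in a few lines. A sketch in Python (the file name is a placeholder; this only works on an NTFS volume under Windows):

        # Write a "type" tag into an alternate data stream of the file.
        with open(r'C:\Books\guide.pdf:type', 'w') as ads:
            ads.write('eBook')

        # Read the tag back; the main file contents are untouched.
        with open(r'C:\Books\guide.pdf:type') as ads:
            print(ads.read())       # -> eBook

    One caveat: streams survive moves and copies between NTFS volumes, but most tools silently drop them when a file is copied to FAT/exFAT, zipped, or emailed.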

    Read the article

  • Should all my files be stored on my shared partition?

    - by James
    I am setting up a triple-boot hard drive and was going to use a fourth partition to share files between the OSes. I was wondering if there is any point in leaving much space on each OS partition to store files, or whether I should just make the shared partition big and put everything on it. Is there any difference in speed between accessing files on the shared partition and native files? Are there any other benefits or disadvantages to keeping files on either the native or the shared partition? EDIT: The OSes in question are Windows 7, Ubuntu 12.04, and OS X 10.7.4.

    Read the article

  • Is it possible to dump the names of all the open files in Notepad++ to a file?

    - by mark
    So, I dragged and dropped multiple files onto Notepad++. The files came from different directories and were selected using different criteria, so I now have many files open in Notepad++, and I need a list of all of them in another file. Right now, my only option is to script the decisions that guided my selection of the files in the first place, which is probably best in the long term, but I wonder if there is a quick way in Notepad++ — some plugin magic or whatever. Suggesting another free editor that has this function is a good option too (not that I am going to ditch Notepad++, God forbid).
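
    One possible shortcut, assuming a default installation that keeps its session state in %APPDATA%\Notepad++\session.xml (an assumption, and the session is typically only written out when Notepad++ saves it, e.g. on exit): the open files appear there as <File filename="..."> entries, so a few lines of Python can dump them:

        import os
        import xml.etree.ElementTree as ET

        # Assumed default location of Notepad++'s saved session.
        session = os.path.expandvars(r'%APPDATA%\Notepad++\session.xml')

        tree = ET.parse(session)
        names = [f.get('filename') for f in tree.iter('File')]

        with open('open_files.txt', 'w', encoding='utf-8') as out:
            out.write('\n'.join(names))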

    Read the article

  • Is there a way I can choose which individual files to download within a .torrent?

    - by Valamas
    Is there a way to get the list of files under the Files tab without the download starting? Sometimes I am after a single file within a torrent; however, to see the list, I need to start downloading. I then select the only file I want to download, but the other files that started downloading and were stopped will still appear on the completed side, even if only 1k of 600k of a file has downloaded. You can imagine the mess and confusion this can cause. So, is there a way I can instruct my BitTorrent/uTorrent program to just grab the file list? Solution: you must download the .torrent file; this contains the list of files. If you download using a magnet link, you do not get the list of files until the torrent starts.

    Read the article

  • DFS Replication of users' HOME folders seems not to catch all files... any hints?

    - by TomTom
    I am moving stuff off a file server. I am using DFS for that: the folders are in a DFS tree anyway, so I can set up replication temporarily, then drop the old folder. Works nicely, EXCEPT for the folder containing the users' home drives. Which, incidentally, is also the one I cannot see all files in, due to my permissions. Small setup. We have 159 MB in the users' directories: 1,280 files and 133 folders in the original. The copy has only 157 MB, 1,269 files, and 133 folders. Does anyone know of a way to find out which files are missing? Is this even a problem (it could be some cache files that get regenerated)? Users are all offline (weekend) ;) This is pretty much the last share; all others had exactly ZERO issues.
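
    A quick way to answer "which files are missing" is to diff the two trees by relative path. A minimal sketch in Python, run as an account that can actually see all the users' files (the UNC paths are placeholders):

        import os

        def relative_files(root):
            """Every file under root, as a path relative to root."""
            found = set()
            for dirpath, _dirs, filenames in os.walk(root):
                for name in filenames:
                    found.add(os.path.relpath(os.path.join(dirpath, name), root))
            return found

        # Placeholder paths for the original share and the DFS copy.
        source = relative_files(r'\\oldserver\users$')
        copy   = relative_files(r'\\newserver\users$')

        for path in sorted(source - copy):
            print('missing from copy:', path)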

    Read the article

  • What's faster, cp -R or unpacking tar.gz files?

    - by Buttle Butkus
    I have some tar.gz files that total many gigabytes on a CentOS system. Most of the tar.gz files are actually pretty small, but the ones with images are large: one is 7.7 GB, another is about 4 GB, and a couple are around 1 GB. I have already unpacked the files once, and now I want a second copy of all of them. I assumed that copying the unpacked files would be faster than re-unpacking them, but I started running cp -R about 10 minutes ago, and so far less than 500 MB has been copied. I feel certain that the unpacking process was faster. Am I right? And if so, why? It doesn't seem to make sense that unpacking would be faster than simply duplicating existing structures.
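
    One plausible reason it can be faster: extraction reads a single large sequential compressed stream, while cp -R re-reads and re-creates thousands of small files, paying seek and metadata costs on both sides. A rough way to measure both on the same data, sketched in Python (archive and directory names are placeholders, and the destination folders must not already exist):

        import shutil, tarfile, time

        t0 = time.time()
        shutil.copytree('unpacked', 'copy_of_unpacked')     # what cp -R does
        print('recursive copy: %.1fs' % (time.time() - t0))

        t0 = time.time()
        with tarfile.open('images.tar.gz', 'r:gz') as tar:  # decompress + unpack
            tar.extractall('unpacked_again')
        print('tar.gz unpack:  %.1fs' % (time.time() - t0))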

    Read the article

  • Help With Database Layout

    - by three3
    Hello everyone, I am working on a site similar to Craigslist, where users can make postings and sell items in different cities. One difference between my site and Craigslist is that you will be able to search by ZIP code instead of having all of the cities listed on the page. I already have the ZIP code database with the city, state, latitude, longitude, and ZIP code info for each city. Okay, so to dive into what I need done and what I need help with:

    1.) Although I have the ZIP code database, it is not set up perfectly for my use. (I downloaded it for free from http://zips.sourceforge.net/)
    2.) I need help setting up my database structure (e.g. how many tables I should use and how I should link them). I will be using PHP and MySQL.

    These are my thoughts so far on how the database could be set up (I am not sure if this will work, though).

    Scenario: someone goes to the homepage, and it says, "Please enter your ZIP Code." If they enter "17241", for example, that ZIP code is for a city named Newville, located in Pennsylvania. The query would look like this with the current database setup:

        SELECT city FROM zip_codes WHERE zip = 17241;

    The result of the query would be "Newville". The problem I see now is that when they want to post something in the Newville section of the site, I would have to have an entire table set up just for the Newville city postings. There are over 42,000 cities, which means I would have to have over 42,000 tables (one for each city), so it would be insane to do it that way. One way I was thinking of doing it was to add a column to the ZIP code table called "city_id", a unique number assigned to each city. So, for example, the city Newville would have a city_id of 83. Now if someone comes and posts a listing in the city of Newville, I only need one other table, set up like this:

        CREATE TABLE postings (
          posting_id INT NOT NULL AUTO_INCREMENT,
          for_sale LONGTEXT NULL,
          for_sale_date DATETIME NULL,
          for_sale_city_id INT NULL,
          jobs LONGTEXT NULL,
          jobs_date DATETIME NULL,
          jobs_city_id INT NULL,
          PRIMARY KEY(posting_id)
        );

    (The for_sale and jobs column names are categories of the types of postings users will be able to list under. There will be many more categories than just those two; this is just an example.)

    So now when someone comes to the website looking for something to buy rather than sell, they can enter their ZIP code, 17241 for example, and this query will run:

        SELECT city, city_id FROM zip_codes WHERE zip = 17241;  -- Result: Newville, 83

    (Please note that I will be using PHP to store the ZIP code the user enters in sessions and cookies to remember it throughout the site.) Now it will say, "Please choose your category." If they choose the category "Items For Sale", then this is the query to run and sort the results:

        SELECT posting_id, for_sale, for_sale_date FROM postings WHERE for_sale_city_id = $_SESSION['zip_code'];

    So now my question to everyone is: will this work? I am pretty sure it will, but I do not want to set this thing up, realize I overlooked something, and have to start over from scratch. Any opinions and ideas are welcome, and I will listen to anyone who has some thoughts. I really appreciate the help in advance :D
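
    For what it's worth, the city_id join itself works; note, though, that the last query above filters for_sale_city_id against the raw ZIP stored in the session rather than the resolved city_id. A runnable sketch of the two-table idea, in Python with SQLite for illustration, and with a single category column instead of one set of columns per category (a common normalization, not the schema above):

        import sqlite3

        con = sqlite3.connect(':memory:')
        con.executescript("""
            CREATE TABLE zip_codes (
                zip     TEXT,
                city    TEXT,
                state   TEXT,
                city_id INTEGER      -- unique number per city, shared by its ZIPs
            );
            CREATE TABLE postings (
                posting_id INTEGER PRIMARY KEY AUTOINCREMENT,
                category   TEXT,     -- 'for_sale', 'jobs', ... one column, not one set per category
                body       TEXT,
                posted_at  TEXT,
                city_id    INTEGER   -- points back at zip_codes.city_id
            );
            INSERT INTO zip_codes VALUES ('17241', 'Newville', 'PA', 83);
            INSERT INTO postings (category, body, posted_at, city_id)
                   VALUES ('for_sale', 'Old bike, $40', '2012-01-01', 83);
        """)

        # Resolve the visitor's ZIP to a city once, keep city_id in the session...
        (city, city_id), = con.execute(
            "SELECT city, city_id FROM zip_codes WHERE zip = ?", ('17241',))

        # ...then every category listing is the same query filtered by city_id.
        rows = con.execute(
            "SELECT posting_id, body, posted_at FROM postings"
            " WHERE category = ? AND city_id = ?", ('for_sale', city_id)).fetchall()
        print(city, rows)   # -> Newville [(1, 'Old bike, $40', '2012-01-01')]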

    Read the article

  • Visual Studio compiles WPF application twice during build

    - by Brian Ensink
    I have a WPF app in VS2008 that compiles twice during the build. The two CSC command lines are similar but with some differences: the first CSC command line has no /resource options, while the second has these two additional /resource arguments:

        /resource:"obj\Debug AutoCAD\VisualApp.g.resources"
        /resource:"obj\Debug AutoCAD\CAP.Visual.Properties.Resources.resources"

    I hate to post such huge, ugly compiler output, but here are both command lines.

        2>c:\WINDOWS\Microsoft.NET\Framework\v3.5\Csc.exe /noconfig /nowarn:1701,1702 /platform:x86 /errorreport:prompt /warn:4 /define:DEBUG;TRACE /reference:..\BIN\RELEASE\FOO.Base.dll /reference:..\BIN\RELEASE\FOO.CAPArchiveHandler.dll /reference:..\BIN\RELEASE\FOO.CAPDOM.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\PresentationCore.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\PresentationFramework.dll" /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll" /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Data.DataSetExtensions.dll" /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\System.Runtime.Serialization.dll" /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.Docking.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.Navigation.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\UIAutomationProvider.dll" /reference:c:\project\FooStudio\BIN\DEBUGCAD\VS-3DEngine-Wrapper.dll /reference:c:\project\FooStudio\BIN\DEBUGCAD\VisualServiceClient.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\WindowsBase.dll" /debug+ /debug:full /filealign:512 /out:"obj\Debug AutoCAD\VisualApp.exe" /target:winexe App.xaml.cs MainWindow.xaml.cs CameraAndLightingControl.xaml.cs CameraAndLightingViewModel.cs MainWindowViewModel.cs Properties\AssemblyInfo.cs Properties\Resources.Designer.cs Properties\Settings.Designer.cs ScenarioToolsWindow.xaml.cs SceneGraph.cs ScenePart.cs ToolWindow.xaml.cs "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\CameraAndLightingControl.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\MainWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\ScenarioToolsWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\ToolWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\App.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\GeneratedInternalTypeHelper.g.cs"
        2>Done building project "0ye0i4wb.tmp_proj".

        2>c:\WINDOWS\Microsoft.NET\Framework\v3.5\Csc.exe /noconfig /nowarn:1701,1702 /platform:x86 /errorreport:prompt /warn:4 /define:DEBUG;TRACE /reference:..\BIN\RELEASE\FOO.Base.dll /reference:..\BIN\RELEASE\FOO.CAPArchiveHandler.dll /reference:..\BIN\RELEASE\FOO.CAPDOM.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\PresentationCore.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\PresentationFramework.dll" /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll" /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Data.DataSetExtensions.dll" /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\System.Runtime.Serialization.dll" /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:"c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.Docking.dll" /reference:"C:\Program Files\Telerik\RadControls for WPF Q1 2010\Binaries\WPF\Telerik.Windows.Controls.Navigation.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\UIAutomationProvider.dll" /reference:c:\project\FooStudio\BIN\DEBUGCAD\VS-3DEngine-Wrapper.dll /reference:c:\project\FooStudio\BIN\DEBUGCAD\VisualServiceClient.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\WindowsBase.dll" /debug+ /debug:full /filealign:512 /out:"obj\Debug AutoCAD\VisualApp.exe" /resource:"obj\Debug AutoCAD\VisualApp.g.resources" /resource:"obj\Debug AutoCAD\FOO.Visual.Properties.Resources.resources" /target:winexe App.xaml.cs MainWindow.xaml.cs CameraAndLightingControl.xaml.cs CameraAndLightingViewModel.cs MainWindowViewModel.cs Properties\AssemblyInfo.cs Properties\Resources.Designer.cs Properties\Settings.Designer.cs ScenarioToolsWindow.xaml.cs SceneGraph.cs ScenePart.cs ToolWindow.xaml.cs "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\CameraAndLightingControl.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\MainWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\ScenarioToolsWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\ToolWindow.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\App.g.cs" "c:\project\FooStudio\VisualApp\obj\Debug AutoCAD\GeneratedInternalTypeHelper.g.cs"

    Any idea what could possibly cause this? I think this is causing a problem I posted about earlier today.

    Read the article

  • Samplegrabber works fine on AVI/MPEG files but choppy with WMV

    - by jomtois
    I have been using the latest version of the WPFMediaKit. What I am trying to do is write a sample application that uses the Samplegrabber to capture the video frames of video files so I can have them as individual Bitmaps. So far, I have had good luck with the following code when constructing and rendering my graph. However, when I use this code to play back a .wmv video file with the samplegrabber attached, it plays back jumpy or choppy. If I comment out the line where I add the samplegrabber filter, it works fine. Again, it works correctly with the samplegrabber on AVI/MPEG, etc.

        protected virtual void OpenSource()
        {
            FrameCount = 0;

            /* Make sure we clean up any remaining mess */
            FreeResources();

            if (m_sourceUri == null)
                return;

            string fileSource = m_sourceUri.OriginalString;

            if (string.IsNullOrEmpty(fileSource))
                return;

            try
            {
                /* Creates the GraphBuilder COM object */
                m_graph = new FilterGraphNoThread() as IGraphBuilder;

                if (m_graph == null)
                    throw new Exception("Could not create a graph");

                /* Add our preferred audio renderer */
                InsertAudioRenderer(AudioRenderer);

                var filterGraph = m_graph as IFilterGraph2;

                if (filterGraph == null)
                    throw new Exception("Could not QueryInterface for the IFilterGraph2");

                IBaseFilter renderer = CreateVideoMixingRenderer9(m_graph, 1);

                IBaseFilter sourceFilter;

                /* Have DirectShow find the correct source filter for the Uri */
                var hr = filterGraph.AddSourceFilter(fileSource, fileSource, out sourceFilter);
                DsError.ThrowExceptionForHR(hr);

                /* We will want to enum all the pins on the source filter */
                IEnumPins pinEnum;
                hr = sourceFilter.EnumPins(out pinEnum);
                DsError.ThrowExceptionForHR(hr);

                IntPtr fetched = IntPtr.Zero;
                IPin[] pins = { null };

                /* Counter for how many pins successfully rendered */
                int pinsRendered = 0;

                m_sampleGrabber = (ISampleGrabber)new SampleGrabber();
                SetupSampleGrabber(m_sampleGrabber);
                hr = m_graph.AddFilter(m_sampleGrabber as IBaseFilter, "SampleGrabber");
                DsError.ThrowExceptionForHR(hr);

                /* Loop over each pin of the source filter */
                while (pinEnum.Next(pins.Length, pins, fetched) == 0)
                {
                    if (filterGraph.RenderEx(pins[0], AMRenderExFlags.RenderToExistingRenderers, IntPtr.Zero) >= 0)
                        pinsRendered++;

                    Marshal.ReleaseComObject(pins[0]);
                }

                Marshal.ReleaseComObject(pinEnum);
                Marshal.ReleaseComObject(sourceFilter);

                if (pinsRendered == 0)
                    throw new Exception("Could not render any streams from the source Uri");

                /* Configure the graph in the base class */
                SetupFilterGraph(m_graph);

                HasVideo = true;

                /* Sets the NaturalVideoWidth/Height */
                //SetNativePixelSizes(renderer);
            }
            catch (Exception ex)
            {
                /* This exception will happen usually if the media does
                 * not exist or could not open due to not having the
                 * proper filters installed */
                FreeResources();

                /* Fire our failed event */
                InvokeMediaFailed(new MediaFailedEventArgs(ex.Message, ex));
            }

            InvokeMediaOpened();
        }

    And:

        private void SetupSampleGrabber(ISampleGrabber sampleGrabber)
        {
            FrameCount = 0;

            var mediaType = new AMMediaType
            {
                majorType = MediaType.Video,
                subType = MediaSubType.RGB24,
                formatType = FormatType.VideoInfo
            };

            int hr = sampleGrabber.SetMediaType(mediaType);
            DsUtils.FreeAMMediaType(mediaType);
            DsError.ThrowExceptionForHR(hr);

            hr = sampleGrabber.SetCallback(this, 0);
            DsError.ThrowExceptionForHR(hr);
        }

    I have read a few things saying the .wmv or .asf formats are asynchronous or something. I have attempted inserting a WMAsfReader to decode, which works, but once it goes to the VMR9 it gives the same behavior. Also, I have gotten it to work correctly when I comment out the IBaseFilter renderer = CreateVideoMixingRenderer9(m_graph, 1); line and use filterGraph.Render(pins[0]); instead. The only drawback is that it then renders in an ActiveMovie window of its own instead of my control; however, the samplegrabber functions correctly and without any skipping. So I am thinking the bug is in the VMR9 / samplegrabbing somewhere. Any help? I am new to this.

    Read the article

  • Merge two XML files with XSLT

    - by MADAL
    I am trying to merge two XML files using XSLT, starting from the merge stylesheet at [http://www2.informatik.hu-berlin.de/...erge.xslt.html]. I need to change that XSLT file in order to get a different result. My first file, which needs to be merged:

        <A1>
          <A2>
            <A3>
              <b>a</b>
              <c>b</c>
            </A3>
          </A2>
        </A1>
        <A1>
          <A2>
            <A3>
              <b>1</b>
              <c>2</c>
            </A3>
          </A2>
        </A1>
        ...

    My second file, which needs to be merged with the first one:

        <A1>
          <A2>
            <A3>
              <b>x</b>
              <c>y</c>
            </A3>
          </A2>
        </A1>
        <A1>
          <A2>
            <A3>
              <b>8</b>
              <c>9</c>
            </A3>
          </A2>
        </A1>

    The result should be:

        <A1>
          <A2>
            <A3>
              <b>a</b><bx><b>a</b></bx><bx><b>x</b></bx>
              <c>by</c>
            </A3>
          </A2>
        </A1>
        <A1>
          <A2>
            <A3>
              <b>1</b><bx><b>1</b></bx><bx><b>18</b></bx>
              <c>29</c>
            </A3>
          </A2>
        </A1>

    Could anybody help me with that? Regards

    Read the article

  • Unresolved compilation problems -- can't use .jar files that I have created

    - by Mike
    I created a few .jar files and am trying to access them in another application. I have tried both Eclipse and IntelliJ and experience the same issue:

        java.lang.Error: Unresolved compilation problems:
            The import com.XXXX.XXXXXXXXX.project2 cannot be resolved
            The import com.XXXX.XXXXXXXXX.project2 cannot be resolved
            BeanFactory cannot be resolved to a type
            Author cannot be resolved to a type
            AuthorFactoryImpl cannot be resolved to a type
            Author cannot be resolved to a type
            Author cannot be resolved to a type

    I have been using Maven during this process, and the jars compile fine. I have included them on the file path using both the Maven .pom file and by assigning them directly. I have also unassigned the direct file path and left the reference in Maven, and vice versa -- no difference. The .jar file's class info is below.

    File structure:

        Author.java
        BeanWithIdentityInterface
        Books
        Subject

    Interface:

        package com.XXXX.training;

        /**
         * Created with IntelliJ IDEA.
         * User: kBPersonal
         * Date: 11/5/12
         * Time: 3:16 PM
         */
        public interface BeanWithIdentityInterface<I> {
            I getId();
        }

    Author.java:

        package com.XXXX.training;

        /**
         * Created with IntelliJ IDEA.
         * User: kBPersonal
         * Date: 10/25/12
         * Time: 12:03 PM
         */
        public class Author implements BeanWithIdentityInterface<Integer> {
            private Integer id = null;
            private String name = null;
            private String picture = null;
            private String bio = null;

            public Author(Integer id, String bio, String name, String picture) {
                this.id = id;
                this.bio = bio;
                this.name = name;
                this.picture = picture;
            }

            public Author() {}

            @Override
            public Integer getId() {
                return id;
            }

            public void setId(Integer id) {
                this.id = id;
            }

            public String getName() {
                return name;
            }

            public void setName(String name) {
                this.name = name;
            }

            public String getPicture() {
                return picture;
            }

            public void setPicture(String picture) {
                this.picture = picture;
            }

            public String getBio() {
                return bio;
            }

            public void setBio(String bio) {
                this.bio = bio;
            }

            @Override
            public String toString() {
                return "\n\tAuthor Id: " + this.getId() + " | Bio:" + this.getBio()
                        + " | Name:" + this.getName() + " | Picture: " + this.getPicture();
            }
        }

    Implementing servlet:

        package com.acentia.training.project3.controller;

        import com.acentia.training.*;
        import com.acentia.training.project2.AuthorFactoryImpl;
        import com.acentia.training.project2.BeanFactory;

        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import java.io.IOException;
        import java.io.PrintWriter;

        /**
         * Created with IntelliJ IDEA.
         * User: kBPersonal
         * Date: 11/11/12
         * Time: 6:34 PM
         */
        public class ListAuthorServlet extends AbstractBaseServlet {

            private static final long serialVersionUID = -6934109551750492182L;

            public void doProcess(final HttpServletRequest request, final HttpServletResponse response)
                    throws IOException {

                final BeanFactory<Author, Integer> authorFactory = new AuthorFactoryImpl();
                Author author = null;

                if (authorFactory != null) {
                    author = (Author) authorFactory.getMember(5);
                }

    I can't pull the Author class. Any help would be greatly appreciated.

    Read the article

  • How do I use the 7-zip LZMA SDK 9.x to self-extract?

    - by Christopher
    I am writing an SFX for an installer. I have a number of good reasons for doing this, primarily:

    - The installer is actually a large Python program which uses plugins. Using py2exe or pyinstaller makes doing plugins annoyingly complicated.
    - I want to be able to pass command-line options directly to the Python installer script, as if it were being run directly.
    - Using the existing 7-zip SFX modules is clunky because I cannot pass command-line options directly into the processes I want to start.
    - I need more flexibility than any of the existing SFX modules I have seen provide.

    I have already tried using the SDK to open the file, seek to the 7z archive signature, and run the decompression from there. That fails because the SzArEx_Open() call appears to assume that you are starting at offset 0 in the file. (I am using the File_Seek() call to perform the seeking.) It seems like there must be a way to do this, since the 7z archive format itself supports multiple embedded streams. Any pointers to examples would be awesome, but narrative explanation is also quite welcome!
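
    For reference, a 7z archive begins with the six signature bytes 37 7A BC AF 27 1C. A small sketch, in Python, that locates that signature inside a stub executable; that offset is the position the SDK's read/seek callbacks would need to treat as stream position 0, so one workaround is to carve the payload out (or, in C, bias every seek and read by the offset). The file names are placeholders:

        SEVEN_ZIP_SIG = bytes([0x37, 0x7A, 0xBC, 0xAF, 0x27, 0x1C])  # "7z" magic

        with open('installer.exe', 'rb') as f:   # placeholder stub name
            blob = f.read()

        offset = blob.find(SEVEN_ZIP_SIG)
        if offset < 0:
            raise SystemExit('no embedded 7z archive found')
        print('archive starts at byte', offset)

        # Workaround for SzArEx_Open() assuming offset 0: write the archive
        # out as its own file and open that instead.
        with open('payload.7z', 'wb') as out:
            out.write(blob[offset:])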

    Read the article

  • Fastest way to read data from a lot of ASCII files

    - by Alsenes
    Hi guys, for a college exercise that I've already submitted, I needed to read a .txt file which contained a lot of image file names (one per line). Then I needed to open each image as an ASCII file and read its data (the images were in PPM format) and do a series of things with them. The thing is, I noticed my program was spending 70% of its time reading the data from the files, instead of in the other calculations I was doing (finding the number of repetitions of each pixel with a hash table, finding different pixels between two images, etc.), which I found quite odd to say the least. This is what the PPM format looks like:

        P3      //This value can be ignored when reading the file, because all images will be correctly formatted
        4 4
        255     //This value can also be ignored; it will always be 255.
         0  0  0    0  0  0    0  0  0   15  0 15
         0  0  0    0 15  7    0  0  0    0  0  0
         0  0  0    0  0  0    0 15  7    0  0  0
        15  0 15    0  0  0    0  0  0    0  0  0

    This is how I was reading the data from the files:

        ifstream fdatos;
        fdatos.open(argv[1]);         // Open file with the names of all the images
        const int size = 128;
        char file[size];              // Where I'll get the image name
        Image *img;
        while (fdatos >> file) {      // While there are still image names left, continue
            ifstream fimagen;
            fimagen.open(file);       // Open image file
            img = new Image(fimagen); // Create new image object with its data file
            ………                       // Rest of the calculations with that image
            ………
            delete img;               // Delete image object when done
            fimagen.close();          // Close image file when done
        }
        fdatos.close();

    And inside the image object I read the data like this:

        const int tallafirma = 100;
        char firma[tallafirma];
        fich_in >> std::setw(100) >> firma;      // Read the P3 part, can be ignored

        int maxvalue, numpixels;
        fich_in >> height >> width >> maxvalue;  // Read the next three values
        numpixels = height * width;
        datos = new Pixel[numpixels];

        int r, g, b;  // Don't need to be ints; max value is 255, so an unsigned char would be ok.
        for (int i = 0; i < numpixels; i++) {
            fich_in >> r >> g >> b;
            datos[i] = Pixel(r, g, b);
        }

    This last part is the slow one. I think I should be able to read all this data in one single read into a buffer or something, which would be stored in an array of unsigned chars, and then I'd only need to do:

        buffer[0]   // -> Pixel 1 - Red data
        buffer[1]   // -> Pixel 1 - Green data
        buffer[2]   // -> Pixel 1 - Blue data

    So, any ideas? I think I can improve it quite a bit by reading everything into an array in one single call, I just don't know how that is done. Also, is it possible to know how many images will be in the "index file"? Is it possible to know the number of lines a file has? (Because there's one file name per line...) Thanks!!
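
    As an illustration of the slurp-it-once idea, here is a sketch in Python rather than C++ for brevity; the same pattern in C++ would be one read() into a buffer followed by a tokenising pass. It assumes well-formed P3 files with no # comments, as described above, and the file name is a placeholder:

        def read_ppm_p3(path):
            """Parse a plain-text (P3) PPM by reading the whole file once."""
            with open(path) as f:
                tokens = f.read().split()    # single read, then cheap tokenising
            assert tokens[0] == 'P3'
            # Width comes before height in the PPM header.
            width, height, maxval = int(tokens[1]), int(tokens[2]), int(tokens[3])
            flat = [int(t) for t in tokens[4:4 + 3 * width * height]]
            # Regroup the flat r g b r g b ... run into (r, g, b) pixel tuples.
            pixels = list(zip(flat[0::3], flat[1::3], flat[2::3]))
            return width, height, maxval, pixels

        w, h, maxval, pixels = read_ppm_p3('image.ppm')   # placeholder file name
        print(w, h, pixels[0])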

    Read the article

  • "undefined reference" error with namespace across multiple files

    - by user1330734
    I've looked at several related posts but had no luck with this error. I receive the undefined reference error message below when my namespace exists across multiple files. If I compile only ConsoleTest.cpp with the contents of Console.cpp dumped into it, the source compiles. I would appreciate any feedback on this issue; thanks in advance.

        g++ Console.cpp ConsoleTest.cpp -o ConsoleTest.o -Wall
        /tmp/cc8KfSLh.o: In function `getValueTest()':
        ConsoleTest.cpp:(.text+0x132): undefined reference to `void Console::getValue<unsigned int>(unsigned int&)'
        collect2: ld returned 1 exit status

    Console.h:

        #include <iostream>
        #include <sstream>
        #include <string>

        namespace Console {
            std::string getLine();

            template <typename T>
            void getValue(T &value);
        }

    Console.cpp:

        #include "Console.h"

        using namespace std;

        namespace Console {
            string getLine() {
                string str;
                while (true) {
                    cin.clear();
                    if (cin.eof()) {
                        break;            // handle eof (Ctrl-D) gracefully
                    }
                    if (cin.good()) {
                        char next = cin.get();
                        if (next == '\n')
                            break;
                        str += next;      // add character to string
                    } else {
                        cin.clear();      // clear error state
                        string badToken;
                        cin >> badToken;
                        cerr << "Bad input encountered: " << badToken << endl;
                    }
                }
                return str;
            }

            template <typename T>
            void getValue(T &value) {
                string inputStr = Console::getLine();
                istringstream strStream(inputStr);
                strStream >> value;
            }
        }

    ConsoleTest.cpp:

        #include "Console.h"

        void getLineTest() {
            std::string str;
            std::cout << "getLinetest" << std::endl;
            while (str != "next") {
                str = Console::getLine();
                std::cout << "<string>" << str << "</string>" << std::endl;
            }
        }

        void getValueTest() {
            std::cout << "getValueTest" << std::endl;
            unsigned x = 0;
            while (x != 12345) {
                Console::getValue(x);
                std::cout << "x: " << x << std::endl;
            }
        }

        int main() {
            getLineTest();
            getValueTest();
            return 0;
        }

    Read the article

  • VBScript Multiple folder check if then statement

    - by user2868186
    I had this working just fine before, with the exception of getting an error if one of the folders wasn't there, so I tried to fix it. I searched for a while (as much as I can at work) for a solution and tried different methods, still no luck, and my IT tickets are stacking up at work, lol, woohoo. Thanks for any help provided. I am getting a syntax error on line 60, character 60. Thanks again.

        Option Explicit

        Dim objFSO, Folder1, Folder2, Folder3, zipFile
        Dim ShellApp, zip, oFile, CurDate, MacAdd, objWMIService
        Dim MyTarget, MyHex, MyBinary, i, strComputer, objItem, FormatMAC
        Dim oShell, oCTF, CurDir, scriptPath, oRegEx, colItems
        Dim FoldPath1, FoldPath2, FoldPath3, foldPathArray

        Const FOF_SIMPLEPROGRESS = 256

        'Grabs MAC from current machine
        strComputer = "."
        Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
        Set colItems = objWMIService.ExecQuery _
            ("Select * From Win32_NetworkAdapterConfiguration Where IPEnabled = True")
        For Each objItem in colItems
            MacAdd = objItem.MACAddress
        Next

        'Finds the pattern of a MAC address then changes it for
        'file naming purposes. You can change the FormatMAC line of the code
        'in parenthesis where the periods are, to whatever you like
        'as long as its within the standard file naming convention
        Set oRegEx = CreateObject("VBScript.RegExp")
        oRegEx.Pattern = "([\dA-F]{2}).?([\dA-F]{2}).?([\dA-F]" _
            & "{2}).?([\dA-F]{2}).?([\dA-F]{2}).?([\dA-F]{2})"
        FormatMAC = oRegEx.Replace(MacAdd, "$1.$2.$3.$4.$5.$6")

        'Gets current date in a format for file naming
        'Periods can be replaced with anything that is standard to
        'file naming convention
        CurDate = Month(Date) & "." & Day(Date) & "." & Year(Date)

        'Gets path of the directory where the script is being run from
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        scriptPath = Wscript.ScriptFullName
        Set oFile = objFSO.GetFile(scriptPath)
        CurDir = objFSO.GetParentFolderName(oFile)

        'Where and what the zip file will be called/saved
        MyTarget = CurDir & "\" & "IRAP_LOGS_" & CurDate & "_" & FormatMAC & ".zip"

        'Actual creation of the zip file
        MyHex = Array(80, 75, 5, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
        For i = 0 To UBound(MyHex)
            MyBinary = MyBinary & Chr(MyHex(i))
        Next
        Set oShell = CreateObject("WScript.Shell")
        Set oCTF = objFSO.CreateTextFile(MyTarget, True)
        oCTF.Write MyBinary
        oCTF.Close
        Set oCTF = Nothing
        wScript.Sleep(3000)

        folder1 = True
        folder2 = True
        folder3 = True

        'Adds folders to the zip file created earlier
        'change these folders to whatever is needing to be copied into the zip folder

        'Folder1
        If not objFSO.FolderExists("C:\Windows\Temp\SMSTSLog") and If not objFSO.FolderExists("X:\Windows\Temp\SMSTSLog") then
            Folder1 = false
        End If
        If objFSO.FolderExists("C:\Windows\Temp\SMSTSLog") Then
            Folder1 = "C:\Windows\Temp\SMSTSLog"
            Set FoldPath1 = objFSO.getFolder(Folder1)
        Else
            Folder1 = "X:\windows\Temp\SMSTSLog"
            Set FoldPath1 = objFSO.getFolder(Folder1)
        End If

        'Folder2
        If not objFSO.FolderExists("C:\Windows\System32\CCM\Logs") and If not objFSO.FolderExists("X:\Windows\System32\CCM\Logs") then
            Folder2 = false
        End If
        If objFSO.FolderEXists("C:\Windows\System32\CCM\Logs") Then
            Folder2 = "C:\Windows\System32\CCM\Logs"
            Set FoldPath2 = objFSO.getFolder(Folder2)
        Else
            Folder2 = "X:\Windows\System32\CCM\Logs"
            Set FoldPath2 = objFSO.getFolder(Folder2)
        End If

        'Folder3
        If not objFSO.FolderExists("C:\Windows\SysWOW64\CCM\Logs") and If not objFSO.FolderExists("X:\Windows\SysWOW64\CCM\Logs") then
            Folder3 = false
        End If
        If objFSO.FolderExists("C:\Windows\SysWOW64\CCM\Logs") Then
            Folder3 = "C:\Windows\SysWOW64\CCM\Logs"
            set FolderPath3 = objFSO.getFolder(Folder3)
        Else
            Folder3 = "X:\Windows\SysWOW64\CCM\Logs"
            Set FoldPath3 = objFSO.getFolder(Folder3)
        End If

        set objFSO = CreateObject("Scripting.FileSystemObject")
        objFSO.OpenTextFile(MyTarget, 2, True).Write "PK" & Chr(5) & Chr(6) _
            & String(18, Chr(0))

        Set ShellApp = CreateObject("Shell.Application")
        Set zip = ShellApp.NameSpace(MyTarget)

        'checks if files are there before trying to copy
        'otherwise it will error out
        If folder1 = True And FoldPath1.files.Count >= 1 Then
            zip.CopyHere Folder1
        End If
        WScript.Sleep 3000
        If folder2 = true And FoldPath2.files.Count >= 1 Then
            zip.CopyHere Folder2
        End If
        WScript.Sleep 3000
        If folder3 = true And FoldPath3.files.Count >= 1 Then
            zip.CopyHere Folder3
        End If
        WScript.Sleep 5000

        set ShellApp = Nothing
        set ZipFile = Nothing
        Set Folder1 = Nothing
        Set Folder2 = Nothing
        Set Folder3 = Nothing

        createobject("wscript.shell").popup "Zip File Created Successfully", 3

    Read the article

  • CloudBerry Online Backup 1.5 for Windows Home Server

    - by The Geek
    Overview

    CloudBerry Online Backup version 1.5 is a front-end application for Amazon S3 storage for backing up your Windows Home Server data. It makes backing up your essential data to Amazon S3 an easy process in the event disaster strikes.

    Installation

    You install the CloudBerry add-in as you do any add-in for Windows Home Server. On a PC on your network, browse to the shared folders on your server, open the Add-Ins folder, and copy over WHS_CloudBerryOnlineBackupSetup_v1.5.0.81S3o.msi (link below), then close out of the folder. Next launch the Windows Home Server Console, click Settings, then Add-Ins. Click on the Available tab and click the Install button. It installs very quickly, and when you get the Installation Succeeded dialog click OK. You will lose your connection through the Console; just click OK, then reconnect. After reconnecting, you'll see CloudBerry Backup has been installed, and you can begin using it. You can set up a backup plan right away or find out what's new with version 1.5.

    Amazon S3 Account

    If you don't already have an Amazon S3 account, you'll be prompted to create a new one. Click on the "Create an account" hyperlink, which takes you to the Amazon S3 page where you can sign up. After reviewing the functionality of Amazon S3, click on the Sign Up for Amazon S3 button. Enter your contact information and accept the Amazon Web Services Customer Agreement. You're then shown their pricing for storage plans. The amount of storage space you use will depend on your needs. It's relatively cheap for smaller amounts of data; just keep in mind that the more data you store and download, the more S3 is going to cost.

    Note: Amazon S3 is introducing Reduced Redundancy Storage, which will lower the cost of the data stored on S3. CloudBerry 1.5 will support this new feature. You can find out more about this new pricing structure.

    Note: Keep in mind that after you first sign up for an Amazon S3 account, it can take up to 24 hours to be authorized. In fact, you may want to sign up for the S3 account before installing the add-in.

    After you sign up for your S3 account, you'll be given access credentials, which you can enter in before creating a storage bucket name.

    Features & Use

    CloudBerry is wizard driven, straightforward, and easy to use. Here we take a look at creating a backup plan. To begin, click on the Setup Backup Plan button to kick off the wizard. Select your backup mode based on the number of features you want; in our example we select Advanced Mode, as it offers more features than Simple Mode. Select your backup storage account or create a new one. You can select a default account by checking "Use currently selected account as default." Now you can go through and select the files and folders you want to back up from your home server. Check the box "Show physical drives" to get more of a selection of files and folders; this also allows you to back up files from your data drive as well. It has full support for drive extenders, so you can back up your shares too. The cool thing about CloudBerry is that it allows you to drill down to specific files and folders, unlike other WHS backup utilities. Next you can use advanced filters to specify files and/or folders to skip if you want. There are compression and encryption options as well; these will save storage space and bandwidth and keep your data secure. Purge options let you customize how older files are discarded. You can also select the option to delete files from the S3 service that have been deleted locally.
    Be careful with this option, however, as you won't be able to restore those files if you delete them locally. You have some nice scheduling options: run backups manually, at a specific date and time, or recurring daily, weekly, or monthly. You can receive email notifications in all cases or only when a backup fails; this is a good option so you know whether things were successful or something failed and you need to back up manually. Email notifications… Give your plan a name… Then, if the summary page looks good, you can continue, or you can still go back at this point if something doesn't look correct and needs adjusting. That's it! You're ready to go, and you have the option to start your first backup right away. After you've created a backup plan, you can go in and edit it, delete it, view its history, or restore files.

    Restoring Files using CloudBerry

    To restore data from your backups, kick off the Restore Wizard and select the backup to restore from. You can select the last backup, a specific point in time, or manually browse through the files. Browse through the directory and select the files you need to restore. Choose the destination to restore the files to: you can select the original location, a specific location, whether to overwrite existing files, or set the location as the default for future restores. If the files are encrypted, enter the correct passwords. If the summary looks good, click on Next to start the restore process. You'll be shown a progress bar at the bottom of the screen while the files are restored. After the process has completed, close out of the Restore Wizard. In this example we restored a couple of music files to the desktop of Windows Home Server… but as shown above, you can save them to the original location, other network locations, or WHS shared folders. This can make it a lot easier to keep track of files you've restored. You can also access different options for CloudBerry by clicking Settings in the WHS Console, then CloudBerry Backup. Here you can set up a new storage account, check for updates, change app options, run diagnostics, and send feedback. Under Options there are several settings you can tweak to get the best experience for your WHS backups.

    CloudBerry Web Interface

    Another nice feature is the CloudBerry Web Interface, which lets you access your data from anywhere you have an Internet connection. To check it out, in the WHS Console click on the Backup Web Interface link… you'll probably want to bookmark the link in your favorite browser. Note: This feature is still in beta, and at the time of this review the Web Interface wasn't up and running, so we weren't able to test it out.

    Performance

    The CloudBerry app works very well through the Windows Home Server Console. The amount of time it takes to back up or restore your data will depend on the speed of your Internet connection and the size of the files. In our tests, backing up 1 GB of data to the Amazon S3 account took around an hour, but we were running it on a DSL line with limited upload speed, so your mileage will vary.

    Product Support

    In our experience, the team at CloudBerry offered great support in a timely manner when we contacted them. You can fill out a help request through a form on their website, and they also have a community forum.

    Conclusion

    We were very pleased with CloudBerry Online Backup for WHS. Its wizard-driven interface makes it extremely easy to use, and it offers comprehensive backup choices for your Amazon S3 account.
    CloudBerry will only back up files that have been modified, so if files haven't been changed, they won't be backed up again. They offer a free 15-day trial; after that a full license is $29.99. Once you buy the app you own it, and charges to your S3 account will vary depending on the amount of data you upload. If you're looking for an effective and easy-to-use front-end application to back up your Windows Home Server data to your Amazon S3 account, CloudBerry is a recommended, affordable choice.

    Download CloudBerry for Windows Home Server
    Sign Up For Amazon S3 Account

    Rating
    Installation: 9
    Ease of Use: 8
    Features: 8
    Performance: 8
    Product Support: 8

    Read the article

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
    More and more I'm finding myself getting lost in the files in some of my larger Web projects. There's so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything, which autocompletes you to any file, type, or member rapidly. Awesome, except when I forget – or when I'm not quite sure of the name of what I'm looking for. Project navigation is still important.

    Sometimes while working on a project I seem to have 30 or more files open, and trying to locate another new file to open in the solution often ends up being a mental exercise – "where did I put that thing?" It's those little hesitations that tend to get in the way of workflow frequently. To make things worse, most NuGet packages for client side frameworks and scripts dump stuff into folders that I generally don't use. I've never been a fan of the 'Content' folder in MVC, which is just an empty layer that doesn't serve much of a purpose. It's usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It's ugly and also inefficient, as it adds additional bytes to every resource link you embed into a page.

    Alternatives

    I've been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I've found useful; maybe you'll find some of these useful as well, or at least get some ideas about what can be changed to provide better project flow.

    The first thing I've been doing is adding a root Code folder and putting all server code into that. I'm a big fan of treating the Web project root folder as my Web root folder, so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately, as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes, this adds another folder level for server code, but it leaves only code related things in one place that's easier to jump back and forth in. Additionally, I find myself doing a lot less with server side code these days and more with client side code, so I want the server code separated from that. The root folder itself then serves as the root content folder. Specifically, I have the Views folder below it, as well as the Css and Scripts folders, which hold only common libraries and global CSS and script code. These days of building SPA style applications, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML View templates for client SPA apps like Angular. Here's an example of what this looks like in a relatively small project:

    The goal is to keep things that are related together, so I don't end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that's open, it's typically the only server code I work with regularly.
    Business logic lives in another project altogether, so other than the controller and maybe ViewModels there's not a lot of code being accessed in the Code folder. So throwing that off the root and isolating it seems like an easy win.

    Nesting Page specific content

    In a lot of my existing applications that are pure server side MVC applications, perhaps with some JavaScript associated with them, I tend to have page level JavaScript and CSS files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have .css and .js files with the same name as the view in the same folder. This looks something like this:

    In order for this to work you also have to make a configuration change inside of the /Views/web.config file, as the Views folder is blocked with the BlockViewHandler, which prohibits access to content from that folder. It's easy to fix by changing the path from * to *.cshtml or *.vbhtml so that only view retrieval is blocked:

        <system.webServer>
          <handlers>
            <remove name="BlockViewHandler"/>
            <add name="BlockViewHandler" path="*.cshtml" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" />
          </handlers>
        </system.webServer>

    With this in place, from inside of your Views you can then reference those same resources like this:

        <link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" />

    and

        <script src="~/Views/Admin/QuizPrognosisItems.js"></script>

    which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well. Making this happen is not really as straightforward as it should be with just Visual Studio, unfortunately, as there's no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio add-in that provides file nesting via a shortcut menu option. Using this you can select each of the 'child' files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view.

    I was even toying with the idea of throwing the controller.cs files into the Views folder, but that's maybe going a little too far :-) It would work, however, as Visual Studio doesn't publish .cs files and the compiler doesn't care where the files live. There are lots of options, and if you think that would make life easier, it's another option to help group related things together.

    Are there any downsides to this? Possibly – if you're using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification, as you may end up looking in multiple folders instead of a single folder. But – again, that's a one time configuration step that's easily handled and much less intrusive than constantly having to search for files in your project.

    Client Side Folders

    The particular project shown in the screen shots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There's a fair amount of client side stuff happening on these pages as well – specifically, several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views – and the layout I've shown above really focuses on the server side aspect, where there are Razor views with related script and css resources.
    For applications that are more client centric and have a lot more script and HTML template based content, I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach, where all the application pieces that make up the SPA application end up below the App folder. Here's what that looks like for me – this is an AngularJS project:

    In this case the App folder holds both the application specific .js files and the partial HTML views that get loaded into this single page SPA application. In this particular Angular SPA application, which has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – AngularJS controllers in this case – with the actual partials. Again, I like the proximity of the view to the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files. This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers, or lots of directives where the alignment between views and code is more segmented, this approach starts falling apart and you'll probably be better off with separate folders in the js folder. Following Angular conventions you'd have controllers/directives/services etc. folders. Please note that I'm not saying any of these ways is right or wrong – this is just what has worked for me, and why!

    Skipping Project Navigation altogether with Resharper

    I've talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything – you can quickly navigate with a shortcut key and autocomplete search. Here's what Resharper's jump to anything looks like:

    Resharper's Goto Anything box lets you type and quick-search over files, classes and members of the entire solution, which is a very fast and powerful way to find what you're looking for in your project, bypassing the Solution Explorer altogether. As long as you remember to use it (which I sometimes don't) and you know what you're looking for, it's by far the quickest way to find things in a project. It's a shame that this sort of simple search interface isn't part of the native Visual Studio IDE.

    Work how you like to work

    Ultimately it all comes down to workflow, how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don't get in the way you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we're dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced, and project organization definitely is something that can get in the way if it doesn't fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As with so many things in ASP.NET, as things evolve and tend to get more complex, I've found that I end up fighting some of the conventions. The good news is that you don't have to follow the conventions, and you have the freedom to do just about anything that works for you.
    Even though what I've shown here diverges from conventions, I don't think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless, think long and hard before breaking those conventions – if there isn't a good reason to break them, or the changes don't provide improved workflow, then it's not worth it. Break the rules, but only if there's a quantifiable benefit. You may not agree with how I've chosen to divert from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and easier to navigate for your environment.

    © Rick Strahl, West Wind Technologies, 2005-2014
    Posted in ASP.NET, MVC

    Read the article

  • Spring ResourceServlet throws a "too many open files" exception in Jetty and Tomcat under Linux

    - by atomsfat
    I was running the petclinic example that was created with Spring Roo; I also tested the booking-mvc example that comes with Spring Web Flow 2.0.9, and the same thing happens there, whenever I reload the main page many times. If I remove the following lines from both examples, there is no error:

        <spring:theme code="styleSheet" var="theme_css"/>
        <spring:url value="/${theme_css}" var="theme_css_url"/>
        <spring:url value="/resources/dojo/dojo.js" var="dojo_url"/>
        <spring:url value="/resources/dijit/themes/tundra/tundra.css" var="tundra_url"/>
        <spring:url value="/resources/spring/Spring.js" var="spring_url"/>
        <spring:url value="/resources/spring/Spring-Dojo.js" var="spring_dojo_url"/>
        <spring:url value="/static/images/favicon.ico" var="favicon" />
        <link rel="stylesheet" type="text/css" media="screen" href="${theme_css_url}"><!-- //required for FF3 and Opera --></link>
        <link rel="stylesheet" type="text/css" href="${tundra_url}"><!-- //required for FF3 and Opera --></link>
        <link rel="SHORTCUT ICON" href="${favicon}" />
        <script src="${dojo_url}" type="text/javascript" ><!-- //required for FF3 and Opera --></script>
        <script src="${spring_url}" type="text/javascript"><!-- //required for FF3 and Opera --></script>
        <script src="${spring_dojo_url}" type="text/javascript"><!-- //required for FF3 and Opera --></script>
        <script language="JavaScript" type="text/javascript">dojo.require("dojo.parser");</script>

    So I can deduce that this is something related to this servlet:

        <servlet>
            <servlet-name>Resource Servlet</servlet-name>
            <servlet-class>org.springframework.js.resource.ResourceServlet</servlet-class>
        </servlet>

        <!-- Map all /resources requests to the Resource Servlet for handling -->
        <servlet-mapping>
            <servlet-name>Resource Servlet</servlet-name>
            <url-pattern>/resources/*</url-pattern>
        </servlet-mapping>

    Running the examples in Jetty 6.1.10 and Tomcat 1.6 on Fedora 12 with Java 1.6.20 produces the errors, but on AIX with WebSphere there are no errors, and on Windows with Tomcat 1.6 there are no errors, so I think this is something related to Linux.

    STACKTRACE

        2010-05-21 12:53:07.733::WARN: Nested in org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.apache.tiles.impl.CannotRenderException: ServletException including path '/WEB-INF/layouts/default.jspx'.:
        org.apache.tiles.impl.CannotRenderException: ServletException including path '/WEB-INF/layouts/default.jspx'.
        at org.apache.tiles.impl.BasicTilesContainer.render(BasicTilesContainer.java:691)
        at org.apache.tiles.impl.BasicTilesContainer.render(BasicTilesContainer.java:643)
        at org.apache.tiles.impl.BasicTilesContainer.render(BasicTilesContainer.java:626)
        at org.apache.tiles.impl.BasicTilesContainer.render(BasicTilesContainer.java:322)
        at org.springframework.web.servlet.view.tiles2.TilesView.renderMergedOutputModel(TilesView.java:100)
        at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:250)
        at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1060)
        at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:798)
        at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:716)
        at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:647)
        at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:552)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:726)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
        at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:285)
        at org.mortbay.jetty.servlet.Dispatcher.error(Dispatcher.java:135)
        at org.mortbay.jetty.servlet.ErrorPageErrorHandler.handle(ErrorPageErrorHandler.java:121)
        at org.mortbay.jetty.Response.sendError(Response.java:274)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:429)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:726)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:206)
        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:324)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:829)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:514)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:395)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488)
    Caused by: org.apache.tiles.util.TilesIOException: ServletException including path '/WEB-INF/layouts/default.jspx'.
        at org.apache.tiles.servlet.context.ServletUtil.wrapServletException(ServletUtil.java:232)
        at org.apache.tiles.servlet.context.ServletTilesRequestContext.forward(ServletTilesRequestContext.java:243)
        at org.apache.tiles.servlet.context.ServletTilesRequestContext.dispatch(ServletTilesRequestContext.java:222)
        at org.apache.tiles.renderer.impl.TemplateAttributeRenderer.write(TemplateAttributeRenderer.java:44)
        at org.apache.tiles.renderer.impl.AbstractBaseAttributeRenderer.render(AbstractBaseAttributeRenderer.java:103)
        at org.apache.tiles.impl.BasicTilesContainer.render(BasicTilesContainer.java:669)
        at org.apache.tiles.impl.BasicTilesContainer.render(BasicTilesContainer.java:689)
        ... 38 more
    Caused by: java.io.FileNotFoundException: /home/tsalazar/Workspace/test/roo_clinic/src/main/webapp/WEB-INF/web.xml (Too many open files)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:106)
        at java.io.FileInputStream.<init>(FileInputStream.java:66)
        at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
        at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
        at java.net.URL.openStream(URL.java:1010)
        at org.apache.jasper.compiler.JspConfig.processWebDotXml(JspConfig.java:114)
        at org.apache.jasper.compiler.JspConfig.init(JspConfig.java:295)
        at org.apache.jasper.compiler.JspConfig.findJspProperty(JspConfig.java:360)
        at org.apache.jasper.compiler.Compiler.generateJava(Compiler.java:141)
        at org.apache.jasper.compiler.Compiler.compile(Compiler.java:409)
        at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:592)
        at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:344)
        at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:470)
        at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:364)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:726)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
        at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:285)
        at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126)
        at org.apache.tiles.servlet.context.ServletTilesRequestContext.forward(ServletTilesRequestContext.java:241)
        ... 43 more
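
    (Editor's note, not from the original post: a quick way to confirm a descriptor leak like this is to count the JVM's open file descriptors before and after a burst of page reloads. The sketch below is my own addition and is Linux-only, since it reads /proc/self/fd; the FileNotFoundException on web.xml above is simply the first open() to fail once the per-process limit, ulimit -n, is exhausted. If the count climbs with every reload and never falls back, something on the request path is opening files without closing them.)

    import java.io.File;

    public class OpenFdCount {
        // Linux-only: /proc/self/fd holds one entry per file descriptor the
        // current process has open, so its size tracks a leak directly.
        public static int count() {
            String[] entries = new File("/proc/self/fd").list();
            return entries == null ? -1 : entries.length; // -1: not on Linux, or not readable
        }

        public static void main(String[] args) {
            System.out.println("Open file descriptors: " + OpenFdCount.count());
        }
    }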

    Read the article

  • How to configure an Ubuntu LDAP client to get password policies from the server?

    - by Rafaeldv
    I have an LDAP server (389-ds) on CentOS. I configured the client, Ubuntu 12.04, to authenticate against that base, and it works very well, but it doesn't get the password policies from the server. For example, if I set the policy to force a user to change the password on first login, Ubuntu ignores it and always logs him in. How can I set up the client to get the policies? Here are the client files:

    /etc/nsswitch.conf
        passwd: files ldap
        group: files ldap
        shadow: files ldap
        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
        networks: files
        protocols: db files
        services: db files
        ethers: db files
        rpc: db files
        netgroup: nis
        sudoers: ldap files

    common-auth
        auth [success=2 default=ignore] pam_unix.so nullok_secure
        auth [success=1 default=ignore] pam_ldap.so use_first_pass
        auth requisite pam_deny.so
        auth required pam_permit.so
        auth optional pam_cap.so

    common-account
        account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so
        account [success=1 default=ignore] pam_ldap.so
        account requisite pam_deny.so
        account required pam_permit.so

    common-password
        password requisite pam_cracklib.so retry=3 minlen=8 difok=3
        password [success=2 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512
        password [success=1 user_unknown=ignore default=die] pam_ldap.so use_authtok try_first_pass
        password requisite pam_deny.so
        password required pam_permit.so
        password optional pam_gnome_keyring.so

    common-session
        session [default=1] pam_permit.so
        session requisite pam_deny.so
        session required pam_permit.so
        session optional pam_umask.so
        session required pam_unix.so
        session optional pam_ldap.so
        session optional pam_ck_connector.so nox11
        session optional pam_mkhomedir.so skel=/etc/skel umask=0022

    /etc/ldap.conf
        base dc=a,dc=b,dc=c
        uri ldaps://a.b.c/
        ldap_version 3
        rootbinddn cn=directory manager
        pam_password md5
        sudoers_base ou=SUDOers,dc=a,dc=b,dc=c
        pam_lookup_policy yes
        pam_check_host_attr yes
        nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,colord,daemon,games,gnats,hplip,irc,kernoops,libuuid,lightdm,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,sshd,sync,sys,syslog,usbmux,uucp,whoopsie,www-data

    /etc/ldap/ldap.conf
        BASE dc=a,dc=b,dc=c
        URI ldaps://a.b.c/
        ssl on
        use_sasl no
        tls_checkpeer no
        sudoers_base ou=SUDOers,dc=a,dc=b,dc=c
        sudoers_debug 2
        pam_lookup_policy yes
        pam_check_host_attr yes
        pam_lookup_policy yes
        pam_check_host_attr yes
        TLS_CACERT /etc/ssl/certs/ca-certificates.crt
        TLS_REQCERT never
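
    (Editor's note, not from the original post: one way to narrow down where the policy gets lost is to bind directly as an affected user, outside the PAM stack. As far as I understand 389-ds, hard violations such as an expired password are rejected by the server at bind time, while "must change on first login" is typically signalled in the bind response, which the client-side PAM modules then have to honor. The sketch below is a minimal JNDI bind test; the user DN and password are placeholders, only the server URI is taken from the question.)

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.directory.InitialDirContext;

    public class LdapBindCheck {
        public static void main(String[] args) {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldaps://a.b.c/");                    // server URI from the question
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "uid=someuser,dc=a,dc=b,dc=c"); // placeholder DN
            env.put(Context.SECURITY_CREDENTIALS, "secret");                    // placeholder password
            try {
                new InitialDirContext(env).close(); // server-side policy errors surface here
                System.out.println("Bind accepted; no hard policy violation reported by the server.");
            } catch (NamingException e) {
                System.out.println("Bind rejected: " + e.getMessage());
            }
        }
    }

    If the server rejects the bind but the Ubuntu login still succeeds, the problem is on the PAM side; if the bind is accepted cleanly, the policy isn't being applied by the server in the first place.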

    Read the article

  • Warning: Cannot modify header information - headers already sent

    - by Pankaj Khurana
    Hi, I am using code available at http://www.forosdelweb.com/f18/zip-lib-php-archivo-zip-vacio-431133/ to create a zip file. First file: zip.lib.php

    <?php
    /* $Id: zip.lib.php,v 1.1 2004/02/14 15:21:18 anoncvs_tusedb Exp $ */
    // vim: expandtab sw=4 ts=4 sts=4:
    /**
     * Zip file creation class.
     * Makes zip files.
     *
     * Last Modification and Extension By:
     *   Hasin Hayder
     *   HomePage: www.hasinme.info
     *   Email: [email protected]
     *   IDE: PHP Designer 2005
     *
     * Originally Based on:
     *   http://www.zend.com/codex.php?id=535&single=1 by Eric Mueller <[email protected]>
     *   http://www.zend.com/codex.php?id=470&single=1 by Denis125 <[email protected]>
     *   a patch from Peter Listiak <[email protected]> for last modified date
     *   and time of the compressed file
     *
     * Official ZIP file format: http://www.pkware.com/appnote.txt
     *
     * @access public
     */
    class zipfile
    {
        /**
         * Array to store compressed data
         * @var array $datasec
         */
        var $datasec = array();

        /**
         * Central directory
         * @var array $ctrl_dir
         */
        var $ctrl_dir = array();

        /**
         * End of central directory record
         * @var string $eof_ctrl_dir
         */
        var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00";

        /**
         * Last offset position
         * @var integer $old_offset
         */
        var $old_offset = 0;

        /**
         * Converts a Unix timestamp to a four byte DOS date and time format (date
         * in high two bytes, time in low two bytes allowing magnitude comparison).
         *
         * @param integer the current Unix timestamp
         * @return integer the current date in a four byte DOS format
         * @access private
         */
        function unix2DosTime($unixtime = 0)
        {
            $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime);

            if ($timearray['year'] < 1980) {
                $timearray['year']    = 1980;
                $timearray['mon']     = 1;
                $timearray['mday']    = 1;
                $timearray['hours']   = 0;
                $timearray['minutes'] = 0;
                $timearray['seconds'] = 0;
            } // end if

            return (($timearray['year'] - 1980) << 25)
                 | ($timearray['mon'] << 21)
                 | ($timearray['mday'] << 16)
                 | ($timearray['hours'] << 11)
                 | ($timearray['minutes'] << 5)
                 | ($timearray['seconds'] >> 1);
        } // end of the 'unix2DosTime()' method

        /**
         * Adds "file" to archive
         *
         * @param string file contents
         * @param string name of the file in the archive (may contain the path)
         * @param integer the current timestamp
         * @access public
         */
        function addFile($data, $name, $time = 0)
        {
            $name     = str_replace('\\', '/', $name);
            $dtime    = dechex($this->unix2DosTime($time));
            $hexdtime = '\x' . $dtime[6] . $dtime[7]
                      . '\x' . $dtime[4] . $dtime[5]
                      . '\x' . $dtime[2] . $dtime[3]
                      . '\x' . $dtime[0] . $dtime[1];
            eval('$hexdtime = "' . $hexdtime . '";');

            $fr  = "\x50\x4b\x03\x04";
            $fr .= "\x14\x00";  // ver needed to extract
            $fr .= "\x00\x00";  // gen purpose bit flag
            $fr .= "\x08\x00";  // compression method
            $fr .= $hexdtime;   // last mod time and date

            // "local file header" segment
            $unc_len = strlen($data);
            $crc     = crc32($data);
            $zdata   = gzcompress($data);
            $zdata   = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug
            $c_len   = strlen($zdata);
            $fr .= pack('V', $crc);          // crc32
            $fr .= pack('V', $c_len);        // compressed filesize
            $fr .= pack('V', $unc_len);      // uncompressed filesize
            $fr .= pack('v', strlen($name)); // length of filename
            $fr .= pack('v', 0);             // extra field length
            $fr .= $name;

            // "file data" segment
            $fr .= $zdata;

            // "data descriptor" segment (optional but necessary if archive is not
            // served as file)
            $fr .= pack('V', $crc);     // crc32
            $fr .= pack('V', $c_len);   // compressed filesize
            $fr .= pack('V', $unc_len); // uncompressed filesize

            // add this entry to array
            $this->datasec[] = $fr;

            // now add to central directory record
            $cdrec  = "\x50\x4b\x01\x02";
            $cdrec .= "\x00\x00";                   // version made by
            $cdrec .= "\x14\x00";                   // version needed to extract
            $cdrec .= "\x00\x00";                   // gen purpose bit flag
            $cdrec .= "\x08\x00";                   // compression method
            $cdrec .= $hexdtime;                    // last mod time & date
            $cdrec .= pack('V', $crc);              // crc32
            $cdrec .= pack('V', $c_len);            // compressed filesize
            $cdrec .= pack('V', $unc_len);          // uncompressed filesize
            $cdrec .= pack('v', strlen($name));     // length of filename
            $cdrec .= pack('v', 0);                 // extra field length
            $cdrec .= pack('v', 0);                 // file comment length
            $cdrec .= pack('v', 0);                 // disk number start
            $cdrec .= pack('v', 0);                 // internal file attributes
            $cdrec .= pack('V', 32);                // external file attributes - 'archive' bit set
            $cdrec .= pack('V', $this->old_offset); // relative offset of local header
            $this->old_offset += strlen($fr);
            $cdrec .= $name;

            // optional extra field, file comment goes here

            // save to central directory
            $this->ctrl_dir[] = $cdrec;
        } // end of the 'addFile()' method

        /**
         * Dumps out file
         *
         * @return string the zipped file
         * @access public
         */
        function file()
        {
            $data    = implode('', $this->datasec);
            $ctrldir = implode('', $this->ctrl_dir);

            return $data . $ctrldir . $this->eof_ctrl_dir
                 . pack('v', sizeof($this->ctrl_dir)) // total # of entries "on this disk"
                 . pack('v', sizeof($this->ctrl_dir)) // total # of entries overall
                 . pack('V', strlen($ctrldir))        // size of central dir
                 . pack('V', strlen($data))           // offset to start of central dir
                 . "\x00\x00";                        // .zip file comment length
        } // end of the 'file()' method

        /**
         * A wrapper of the original addFile function
         *
         * Created by Hasin Hayder at 29th Jan, 1:29 AM
         *
         * @param array An array of files with relative/absolute paths to be added to the zip file
         * @access public
         */
        function addFiles($files /* only pass an array */)
        {
            foreach ($files as $file) {
                if (is_file($file)) { // directory check
                    $data = implode("", file($file));
                    $this->addFile($data, $file);
                }
            }
        }

        /**
         * A wrapper of the original file function
         *
         * Created by Hasin Hayder at 29th Jan, 1:29 AM
         *
         * @param string Output file name
         * @access public
         */
        function output($file)
        {
            $fp = fopen($file, "w");
            fwrite($fp, $this->file());
            fclose($fp);
        }
    } // end of the 'zipfile' class
    ?>

    My second file: newzip.php

    <?
    include("zip.lib.php");
    $ziper = new zipfile();
    $ziper->addFiles(array("index.htm")); // array of files
    // the next three lines force an immediate download of the zip file:
    header("Content-type: application/octet-stream");
    header("Content-disposition: attachment; filename=test.zip");
    echo $ziper->file();
    ?>

    I am getting this warning while executing newzip.php:

    Warning: Cannot modify header information - headers already sent by (output started at E:\xampp\htdocs\demo\zip.lib.php:233) in E:\xampp\htdocs\demo\newzip.php on line 6

    I am unable to figure out the reason. Please help me with this. Thanks

    Read the article

  • What is the fastest way to find duplicates in multiple BIG txt files?

    - by user2950750
    I am really in deep water here and I need a lifeline. I have 10 txt files, and each file has up to 100,000,000 lines of data. Each line is simply a number representing something else; the numbers go up to 9 digits. I need to (somehow) scan these 10 files and find the numbers that appear in all 10 of them. And here comes the tricky part: I have to do it in less than 2 seconds. I am not a developer, so I need an explanation for dummies. I have done enough research to learn that hash tables and MapReduce might be something I can make use of, but can they really make it this fast, or do I need a more advanced solution? I have also been thinking about cutting the files up into smaller ones, so that one file with 100,000,000 lines becomes 100 files with 1,000,000 lines each. But I do not know what is best: 10 files with 100 million lines, or 1000 files with 1 million lines? When I try to open the 100-million-line file, it takes forever, so I think it may simply be too big to use, but I don't know whether you can write code that scans it without opening it. Speed is the most important factor here, and I need to know whether it can be done as fast as I need, or whether I have to store my data in another way, for example in a database like MySQL. Thank you in advance to anybody who can give some good feedback.
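
    (Editor's note, not from the original post: the hash/bitmap idea can be sketched concretely. This is a minimal illustration, not a guaranteed sub-2-second solution; it assumes one non-negative number per line and takes the ten file names as command-line arguments. Since the numbers have at most 9 digits, each file fits in a bit set of one billion bits, about 125 MB, and intersecting ten such bit sets with AND leaves exactly the numbers present in every file.)

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.BitSet;

    public class CommonNumbers {
        private static final int MAX = 1_000_000_000; // 9-digit numbers: 0..999,999,999

        public static void main(String[] args) throws IOException {
            BitSet common = readBits(Paths.get(args[0]));
            for (int i = 1; i < args.length; i++) {
                // Intersect: keep only the numbers seen in every file processed so far.
                common.and(readBits(Paths.get(args[i])));
            }
            System.out.println("Numbers present in all files: " + common.cardinality());
        }

        private static BitSet readBits(Path file) throws IOException {
            BitSet bits = new BitSet(MAX); // ~125 MB of bits covering the whole 9-digit range
            try (BufferedReader reader = Files.newBufferedReader(file)) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String t = line.trim();
                    if (!t.isEmpty()) {
                        bits.set(Integer.parseInt(t));
                    }
                }
            }
            return bits;
        }
    }

    Note that simply reading a billion lines of text will almost certainly blow the 2-second budget on its own, so the honest answer to "do I have to store my data in another way" is probably yes: convert the files once to a sorted binary form or a database index, and run the intersection against that.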

    Read the article

  • How to change the file manager's "Files" icon to the nicer "Nautilus shell" icon in Ubuntu 12.04?

    - by Ivan Ivanov
    I've attached Nautilus to the Unity launcher in Ubuntu 12.04. The default "Files" icon for the Nautilus file manager in 12.04 is not as pretty as the "Nautilus shell" icon. When I search for Nautilus in the Dash, it gives me a "Files" link to Nautilus, which has a grey and quite plain file-cabinet icon. How do I change this grey file-cabinet icon to Nautilus's shell icon, which suits my icon theme well?

    Read the article

  • Why do I get a "the location is not a folder" error when trying to open files using Dash or Synapse?

    - by Christian Howd
    Within the last few days, I've encountered errors when trying to open files using Unity Dash, Synapse, or even the GNOME Search Tool. These methods let me launch applications and folders, but not files of any type, including mp3, doc, odt, and txt. Whichever method I use, the same error dialogue results: "the location is not a folder". Is there something I can do on my end to correct this, or is this a bug in Natty that is still being corrected?

    Read the article

< Previous Page | 222 223 224 225 226 227 228 229 230 231 232 233  | Next Page >