Search Results

Search found 40479 results on 1620 pages for 'binary files'.

Page 825/1620

  • C++ compiler unable to find function (namespace related)

    - by CS student
    I'm working in Visual Studio 2008 on a C++ programming assignment. We were supplied with files that define the following namespace hierarchy (the names are just for the sake of this post, I know "namespace XYZ-NAMESPACE" is redundant): (MAIN-NAMESPACE){ a bunch of functions/classes I need to implement... (EXCEPTIONS-NAMESPACE){ a bunch of exceptions } (POINTER-COLLECTIONS-NAMESPACE){ Set and LinkedList classes, plus iterators } } The MAIN-NAMESPACE contents are split between a bunch of files, and for some reason I don't understand, the operator<< for both Set and LinkedList is entirely outside of the MAIN-NAMESPACE (but within Set and LinkedList's header file). Here's the Set version: template<typename T> std::ostream& operator<<(std::ostream& os, const MAIN-NAMESPACE::POINTER-COLLECTIONS-NAMESPACE::Set<T>& set) Now here's the problem: I have the following data structure: Set A Set B Set C double num It's defined to be in a class within MAIN-NAMESPACE. When I create an instance of the class and try to print one of the sets, it tells me: error C2679: binary '<<' : no operator found which takes a right-hand operand of type 'const MAIN-NAMESPACE::POINTER-COLLECTIONS-NAMESPACE::Set' (or there is no acceptable conversion) However, if I just write a main() function, create Set A, fill it up, and use the operator, it works. Any idea what the problem is? (Note: I tried every combination of using declarations and #include directives I could think of.)

    Read the article

  • Unable to access SQL reporting services on shared site with Themes enabled

    - by Grant
    Hi, I am having some trouble with my IIS web server and SQL Reporting Services. At the moment my site is playing host to both Reporting Services (/reports and /reportserver) and my personal website (domain.com). Only recently have I implemented a Theme on my site, and as such I placed a statement in my web.config file directing it to apply a certain theme in the following manner: <pages styleSheetTheme="General">. Because of this, when I try to access the report pages it fails, telling me it couldn't find the Theme. So what I did was locate the source files for the /reports and /reportserver directories and place the App_Theme folder in them, hoping that would sort everything out. What I am getting now is the following error: *Using themed css files requires a header control on the page. e.g. head runat="server"* Does anyone know how I can get around this? Do I have to hack the SQL Reporting aspx pages? Please note I do NOT want to remove the web.config declaration.

    Read the article

  • Saving ntext data from SQL Server to file directory using asp

    - by April
    A variety of files (pdf, images, etc.) are stored in an ntext field on an MS SQL Server. I am not sure what type of data is in this field, other than that it shows question marks and undefined characters; I am assuming it is binary. The script is supposed to iterate through the rows and extract and save these files to a temp directory. "filename" and "contenttype" are given, and "data" is whatever is in the ntext field. I have tried several solutions: 1) data.SaveToFile "/temp/"&filename, 2 Error: Object required: '????????????????????' ??? 2) File.WriteAllBytes "/temp/"&filename, data Error: Object required: 'File' I have no idea how to import this, or the Server object for MapPath. (Cue: what a noob!) 3) Const adTypeBinary = 1 Const adSaveCreateOverWrite = 2 Dim BinaryStream Set BinaryStream = CreateObject("ADODB.Stream") BinaryStream.Type = adTypeBinary BinaryStream.Open BinaryStream.Write data BinaryStream.SaveToFile "C:\temp\" & filename, adSaveCreateOverWrite Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another. 4) Response.ContentType = contenttype Response.AddHeader "content-disposition","attachment;" & filename Response.BinaryWrite data response.end This works, but the file should be saved to the server instead of popping up a save-as dialog. I am not sure if there is a way to save the response to a file. Thanks for shedding light on any of these problems!

    Read the article

  • Improve Efficiency in Array comparison in Ruby

    - by user2985025
    Hi, I am working in Ruby/Cucumber and have a requirement to develop a comparison module/program to compare two files. The requirements: the project is a migration project, data from one application is moved to another, and I need to compare the data from the existing application against the new one. Solution: I have developed a comparison engine in Ruby for the above requirement. a) Get the data, de-duplicated and sorted, from both DBs. b) Put the data in a text file with "||" as the delimiter. c) Use the key columns (numbers) that identify a unique record in the DB to compare the two files. For example, file1 has 1,2,3,4,5,6 and file2 has 1,2,3,4,5,7, and columns 1,2,3,4,5 are key columns. I use these key columns and compare 6 and 7, which results in a fail. Issue: the major issue we are facing is that if the mismatches are more than 70% for 100,000 records or more, the comparison time is large. If the mismatches are less than 40% then the comparison time is OK. Diff and Diff-LCS will not work in this case because we need key columns to arrive at an accurate data comparison between the two applications. Is there any other method to efficiently reduce the time when the mismatches are more than 70% for 100,000 records or more? Thanks
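
    A sketch of how the key-column comparison above can be made largely insensitive to the mismatch rate: index one file in a hash keyed on the key columns, then stream the other file and look each record up, which stays roughly linear in the number of records regardless of how many rows differ. The sketch is in Java purely for illustration (the question is about Ruby); the file names, the "||" delimiter and the assumption that the first five columns form the key are placeholders.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.Map;

        public class KeyedCompare {
            private static final int KEY_COLUMNS = 5; // assumption: the first five columns form the key

            public static void main(String[] args) throws IOException {
                // Index the first file by its key columns.
                Map<String, String> first = new HashMap<>();
                for (String line : Files.readAllLines(Paths.get("file1.txt"))) {
                    first.put(key(line), line);
                }
                // Stream the second file and look each record up by its key.
                try (BufferedReader reader = Files.newBufferedReader(Paths.get("file2.txt"))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        String match = first.get(key(line));
                        if (match == null || !match.equals(line)) {
                            System.out.println("MISMATCH: " + line);
                        }
                    }
                }
            }

            // Key = the first KEY_COLUMNS fields of a "||"-delimited record.
            private static String key(String line) {
                String[] cols = line.split("\\|\\|");
                return String.join("||", Arrays.copyOf(cols, KEY_COLUMNS));
            }
        }

    The same shape in Ruby would be a Hash built from the first file's key columns and probed while reading the second file line by line.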

    Read the article

  • Loading JNI lib on Mac OS X?

    - by Clinton
    Background So I am attempting to load a jnilib (specifically JOGL) into Java on Mac OS X at runtime. I have been following along with the relevant Stack Overflow questions: Maven and the JOGL Library; Loading DLL in Java - Eclipse - JNI; How to make a jar file that include all jar files. The end goal for me is to package platform-specific JOGL files into a JAR, unzip them into a temp directory, and load them at start-up. I worked my problem back to simply attempting to load JOGL using hard-coded paths: File f = new File("/var/folders/+n/+nfb8NHsHiSpEh6AHMCyvE+++TI/-Tmp-/libjogl.jnilib"); System.load(f.toString()); f = new File ("/var/folders/+n/+nfb8NHsHiSpEh6AHMCyvE+++TI/-Tmp-/libjogl_awt.jnilib"); System.load(f.toString()); I get the following exception when attempting to use the JOGL API: Exception in thread "main" java.lang.UnsatisfiedLinkError: no jogl in java.library.path But when I specify java.library.path by adding the following JVM option: -Djava.library.path="/var/folders/+n/+nfb8NHsHiSpEh6AHMCyvE+++TI/-Tmp-/" everything works fine. Question Is it possible to use System.load (or some other variant) on Mac OS X, invoked at runtime, as a replacement for -Djava.library.path?
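
    A minimal sketch of the unpack-and-load idea described above, under the assumption that the jnilibs are bundled as classpath resources: copy each one to a temp file and hand System.load an absolute path (the resource names and temp-file naming are made up). One caveat: if the library's own Java code calls System.loadLibrary("jogl") internally, that call still searches java.library.path, which may be why the hard-coded System.load calls alone were not enough.

        import java.io.FileNotFoundException;
        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardCopyOption;

        public class NativeLoader {
            // Copies a native library bundled as a classpath resource to a temp file and
            // loads it by absolute path, so no -Djava.library.path is needed for this call.
            static void loadFromResource(String resourceName) throws IOException {
                try (InputStream in = NativeLoader.class.getResourceAsStream(resourceName)) {
                    if (in == null) throw new FileNotFoundException(resourceName);
                    Path tmp = Files.createTempFile("native-", ".jnilib");
                    Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
                    System.load(tmp.toAbsolutePath().toString()); // System.load wants an absolute path
                }
            }

            public static void main(String[] args) throws IOException {
                // Resource paths are assumptions; bundle the jnilibs at the jar root or adjust them.
                loadFromResource("/libjogl.jnilib");
                loadFromResource("/libjogl_awt.jnilib");
            }
        }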

    Read the article

  • open current page in new window including query string

    - by Hatch
    First of all, I am a total dud at all things related to web development, so please bear with me here. I suspect this question is laughable for the web guys, but unfortunately I can't figure this out. Here goes: I have an application that does some processing, writes some result files and then displays the results in an embedded IE browser control. This is done by navigating the browser control to a local html file together with a query string containing the generated result files to display it all. The link target would look something like: c:\SomeFolder\results.htm?results=file%201.xml;file%202.xml;file%203.xml So far, everything's fine. However, the html page contains an href that is supposed to open up the exact same page, just in a normal browser window. What I thought would work is: <a href="#" target="_blank">Show in browser</a> Since it is a link in an html page displayed in an IE control, the link will open up in IE no matter what the default browser might be. This works for IE7 and 8, but not for IE6. With IE6 the query string gets cut off and the browser opens file://c:/results/results.htm# without the query string. I am sure there must be a much better way to do this without the # and which would work in all IEs. How would the pros solve this?

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a Java project that will allow users to parse multiple files with potentially thousands of lines. The information parsed will be stored in different objects, which will then be added to a collection. Since the GUI won't require loading ALL these objects at once and keeping them in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now. I've also thought about the best way to reload previously viewed data after loading a subset of the data into the collection and presenting it on the GUI. Re-run the parser/populate the collection/populate the GUI? Or perhaps find a way to keep the collection in memory, or serialize/deserialize the collection itself? I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Let's say that I filter on ID, so my new subset will contain data from two previously analyzed subsets. This would be no problem if I kept a master copy of the whole data in memory. I've read that google-collections are good and efficient when handling large amounts of data, and offer methods that simplify lots of things, so this might offer an alternative that would allow me to keep the collection in memory. This is just general talk; the question of which collection to use is a separate and complex thing. Do you know what the general recommendation is for this type of task? I'd like to hear what you've done in similar scenarios. I can provide more specifics if needed.
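
    A minimal sketch of the load-on-demand idea discussed above, assuming one record per line: keep only the byte offset of each record in memory, re-read a record from disk when the GUI asks for it, and put a bounded LRU cache in front so recently viewed records don't hit the disk again. The class and method names are invented for the example.

        import java.io.File;
        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class LazyRecordStore {
            private final RandomAccessFile file;
            private final List<Long> offsets = new ArrayList<>(); // byte offset of each record
            private final Map<Integer, String> cache =
                new LinkedHashMap<Integer, String>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                        return size() > 1000; // keep at most 1000 parsed records in memory
                    }
                };

            public LazyRecordStore(File f) throws IOException {
                file = new RandomAccessFile(f, "r");
                long pos = 0;
                while (file.readLine() != null) { // index pass: remember offsets only
                    offsets.add(pos);
                    pos = file.getFilePointer();
                }
            }

            public synchronized String get(int index) throws IOException {
                String cached = cache.get(index);
                if (cached != null) return cached;
                file.seek(offsets.get(index));
                String record = file.readLine(); // re-read just this record from disk
                cache.put(index, record);
                return record;
            }

            public int size() { return offsets.size(); }
        }

    Filtering across previously analyzed subsets then becomes a matter of keeping sets of record indices rather than the parsed objects themselves.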

    Read the article

  • How to prevent the other threads from accessing a method when one thread is accessing a method?

    - by geeta
    I want to search for a string in 10 files and write the matching lines to a single file. I wrote the matching lines from each file to 10 output files (o/p file1, o/p file2...) and then copied those to a single file using 10 threads. But the single output file has mixed output (one line from o/p file 1, another line from o/p file 2, etc...) because it's accessed simultaneously by many threads. If I wait for all threads to complete and then write the single file it will be much slower. I want the output file to be written by one thread at a time. What should I do? My source code (only the write-to-single-file method): public void WriteSingle(File output_file,File final_output) throws IOException { synchronized(output_file){ System.out.println("Writing Single file"); FileOutputStream fo = new FileOutputStream(final_output,true); FileChannel fi = fo.getChannel(); FileInputStream fs = new FileInputStream(output_file); FileChannel fc = fs.getChannel(); int maxCount = (64 * 1024 * 1024) - (32 * 1024); long size = fc.size(); long position = 0; while (position < size) { position += fc.transferTo(position, maxCount, fi); } } }
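
    A minimal sketch of the one-writer-at-a-time idea, not the poster's exact code: every thread synchronizes on the same shared lock object before appending its part file, so only one append runs at a time. (Synchronizing on output_file in the code above does not serialize anything when each thread passes a different File instance there, which appears to be the case with 10 separate o/p files.)

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.nio.channels.FileChannel;

        public class SingleFileWriter {
            // One lock object shared by every thread; entering the synchronized block
            // serializes the appends to the combined output file.
            private static final Object WRITE_LOCK = new Object();

            public static void appendTo(File finalOutput, File partFile) throws IOException {
                synchronized (WRITE_LOCK) {
                    try (FileOutputStream out = new FileOutputStream(finalOutput, true);
                         FileInputStream in = new FileInputStream(partFile)) {
                        FileChannel src = in.getChannel();
                        FileChannel dst = out.getChannel();
                        long position = 0;
                        long size = src.size();
                        while (position < size) {
                            position += src.transferTo(position, size - position, dst);
                        }
                    }
                }
            }
        }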

    Read the article

  • one key multiple values from different sources c#

    - by user2964034
    I am trying to make a C# program that will compare two files for me and tell me the differences in specific parts. I have been able to get the parts I need into variables while looping through, but I now want to add these to a key with 3 values per file, so a key with 6 values overall, which I will then compare against each other later on. But I can only add 3 values at a time using the loop I have, so I need to be able to add the last 3 values to the key without overwriting the first 3. Example of data from the file: [\Advanced\Rules\Correlation Rules\Suspect_portscan\]; CheckDescription =S Detect Port scans; Enabled =B 0; Priority =L 3; I have managed to get what I need into variables, so I have: string SigName would be "Suspect_portscan", and int Enabled, Priority, Blocking as 0, 3 and null respectively. I then want to make a dictionary-type thing, with a key which would be the SigName and the first 3 values as enabled, priority, blocking. Then, when looping through the second file, I want to add the 2nd file's settings for enabled, priority, blocking for the same SigName (so to the key) in the last 3 value slots. I will then compare this against itself, like 'if signame(0) != signame(3)', so if file 1's enabled is not the same as file 2's enabled, make a note and tell me. But the problem I have is not being able to get the data into a dictionary or lookup; I'm completely stumped. It seems like I should use a dictionary with a list for the values, but I can't get it working on the second loop through. Thanks.
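
    A rough sketch of the dictionary shape described above, with one settings slot per file so the second pass fills its own slot instead of overwriting the first. It is written in Java purely to show the shape (in C# the equivalent would be a Dictionary keyed on SigName whose value object holds both files' settings); the class and method names are made up.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Objects;

        public class RuleComparer {
            static class Settings { Integer enabled, priority, blocking; }
            static class RulePair {
                final Settings file1 = new Settings();
                final Settings file2 = new Settings();
            }

            private final Map<String, RulePair> rules = new HashMap<>();

            // Called once per rule per file; secondFile selects which slot gets filled.
            public void record(String sigName, boolean secondFile,
                               Integer enabled, Integer priority, Integer blocking) {
                RulePair pair = rules.computeIfAbsent(sigName, k -> new RulePair());
                Settings s = secondFile ? pair.file2 : pair.file1;
                s.enabled = enabled;
                s.priority = priority;
                s.blocking = blocking;
            }

            public void reportDifferences() {
                for (Map.Entry<String, RulePair> e : rules.entrySet()) {
                    RulePair p = e.getValue();
                    diff(e.getKey(), "Enabled", p.file1.enabled, p.file2.enabled);
                    diff(e.getKey(), "Priority", p.file1.priority, p.file2.priority);
                    diff(e.getKey(), "Blocking", p.file1.blocking, p.file2.blocking);
                }
            }

            private static void diff(String sigName, String field, Integer a, Integer b) {
                if (!Objects.equals(a, b)) {
                    System.out.println(sigName + ": " + field + " differs (" + a + " vs " + b + ")");
                }
            }
        }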

    Read the article

  • Command or tool to display list of connections to a Windows file share

    - by BizTalkMama
    Is there a Windows command or tool that can tell me what users or computers are connected to a Windows fileshare? Here's why I'm looking for this: I've run into issues in the past where our deployment team has deployed BizTalk applications to one of our environments using the wrong bindings, leaving us with two receive locations pointing to the same file share (i.e. both dev and test servers point to dev receive location uri). When this occurs, the two environments in question tend to take turns processing the files received (meaning if I am attempting to debug something in one environment and the other environment has picked the file up, it looks as if my test file has disappeared into thin air). We have several different environments, plus individual developer machines, and I'd rather not have to check each individually to find the culprit. I'm looking for a quick way to detect what locations are connected to the share once I notice my test files vanishing. If I can determine the connections that are invalid, I can go directly to the person responsible for that environment and avoid the time it takes to randomly ask around. Or if the connections appear to be correct, I can go directly to troubleshooting where in the process the message gets lost. Any suggestions?

    Read the article

  • libarchive reads too many chars when extracting a file

    - by ojreadmore
    I've written a C program to extract files from a tar archive using libarchive. I'd like to extract a file from this archive and print it to standard output. But I get extra characters. It's garbage, but it's from another file (possibly adjacent to it in the archive). I expect output to end at </html>. Here is the code that reads this tar file. libarchive 2.8.3 compiled on Mac OS X 10.6.3, gcc 4.2 x86_64. ls -l vendar-definition.html gives me 1921 for the file size, and so does tar tfv 0000.tar | grep vendar-definition.html, and so does the C output that states the file size. To me this seems correct. Two possibilities I can see for why my output is not as expected: 1. I've made a beginner's mistake, or 2. multibyte characters in the archived files have something to do with it.

    Read the article

  • Type errors when using same name

    - by lykimq
    I have 3 files: 1) cpf0.ml type string = char list type url = string type var = string type name = string type symbol = | Symbol_name of name 2) problem.ml: type symbol = | Ident of string 3) test.ml open Problem;; open Cpf0;; let symbol b = function | Symbol_name n -> Ident n When I compile test.ml (ocamlc -c test.ml) I receive an error: This expression has type Cpf0.name = char list but an expression was expected of type string Could you please help me to correct it? Thank you very much. EDIT: Thank you for your answer. I want to explain more about these 3 files, because I am working with extraction from Coq to OCaml: cpf0.ml is generated from cpf.v: Require Import String. Definition string := string. Definition name := string. Inductive symbol := | Symbol_name : name -> symbol. The code extraction.v: Set Extraction Optimize. Extraction Language Ocaml. Require ExtrOcamlBasic ExtrOcamlString. Extraction Blacklist cpf list. which is where ExtrOcamlString comes in. I opened Cpf0 (open Cpf0;;) in problem.ml, and I got a new problem because problem.ml has another definition for the type string: This expression has type Cpf0.string = char list but an expression was expected of type Util.StrSet.elt = string Here is the definition in util.ml that defines the type string: module Str = struct type t = string end;; module StrOrd = Ord.Make (Str);; module StrSet = Set.Make (StrOrd);; module StrMap = Map.Make (StrOrd);; let set_add_chk x s = if StrSet.mem x s then failwith (x ^ " already declared") else StrSet.add x s;; I was trying to change t = string to t = char list, but if I do that I have to change a lot of functions that depend on it (for example: set_add_chk above). Could you please give me a good idea of how to handle this case?

    Read the article

  • PHP: How do I rename a directory where the parent directory is variable?

    - by gsquare567
    I would like to move the files inside uploads/pension/#SOME_VARIABLE_NUMBER#/#SOME_CONSTANT_NUMBER#/. Here is my code: // move pension statements // located at uploads/pension/%COMPANY_ID%/%USER_ID%/%HASH% // so just move the %USER_ID% folder to the new company $oldPensionDir = "uploads/pension/" . $demo_user[Users::companyID] . "/" . $demo_user[Users::userID] . "/"; $newPensionDir = "uploads/pension/" . $newCompanyID . "/" . $demo_user[Users::userID] . "/"; // see if the user had any files, and if so, move them if(file_exists($oldPensionDir)) { // if it doesnt exist, make it if(!file_exists($newPensionDir)) mkdir($newPensionDir); // move the folder rename($oldPensionDir, $newPensionDir); } However... when I need to make the directory with the "mkdir" function, I get: mkdir() [<a href='function.mkdir'>function.mkdir</a>]: No such file or directory OK, maybe the mkdir won't work, but what about the rename? Perhaps that will make the directory if it's not there... nope! rename(uploads/pension/1001/783/,uploads/pension/1000/783/) [<a href='function.rename'>function.rename</a>]: The system cannot find the path specified. (code: 3) So there are two errors. I'm pretty sure that if the renaming works I won't even need the mkdir, but who knows... Can anyone tell me why these errors occur and how to fix them? Thanks!

    Read the article

  • Import CSV to class structure as the user defines

    - by Assimilater
    I have a contact manager program and I would like to offer the feature to import CSV files. The problem is that different data sources order the fields in different ways. I thought of programming an interface for the user to tell it the field order and how to handle exceptions. Here is an example line in one of many possible field orders: "ID#","Name","Rank","Address1","Address2","City","State","Country","Zip","Phone#","Email","Join Date","Sponsor ID","Sponsor Name" "Z1234","Call, Anson","STU","1234 E. 6578 S.","","Somecity","TX","United States","012345","000-000-0000","[email protected]","5/24/2010","z12343","Quantum Independence" Notice that in one data field ("Name") there is a comma to separate last name and first name, and in another there is not. My plan is to have a line for each field (i.e. ID, Name, City, etc.) with an "import to" statement and a list box with options like: Don't Import, BusinessJoin Date, First Name, Zip, and the program recognizes those as properties of an object... I'd also like the user to be able to record preset field orders so they can re-use them for CSV files from the same download source. Then I also need it to check whether a record already exists (is there a record for Anson Call already?) and allow the user to tell it what to do if there is one (i.e. the mailing address may have changed, so if that field is filled, overwrite it; or this mailing address is invalid, so leave the current data untouched for this person and overwrite the rest). While I'm capable of coding this... I'm not very excited about it, and I'm wondering if there's a tool or set of tools out there that already performs most of this functionality... I hope this makes sense...
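
    One possible shape for the "import to" mapping described above, sketched in Java rather than C# and with invented field names: a per-source preset maps column positions to target fields, so the same importer copes with differently ordered CSV sources. It assumes each row has already been split by a CSV parser that understands quoted commas (as in "Call, Anson").

        import java.util.Map;

        public class CsvFieldMapper {
            static class Contact { String id, name, city, zip, email; } // illustrative fields only

            // Preset example: {0 -> "ID", 1 -> "Name", 8 -> "Zip"}; columns missing from the
            // preset are simply not imported ("Don't Import").
            public static Contact toContact(String[] row, Map<Integer, String> preset) {
                Contact c = new Contact();
                for (Map.Entry<Integer, String> m : preset.entrySet()) {
                    if (m.getKey() >= row.length) continue; // short row: skip missing columns
                    String value = row[m.getKey()];
                    switch (m.getValue()) {
                        case "ID":    c.id = value;    break;
                        case "Name":  c.name = value;  break;
                        case "City":  c.city = value;  break;
                        case "Zip":   c.zip = value;   break;
                        case "Email": c.email = value; break;
                        default:      break; // unknown target field: ignore
                    }
                }
                return c;
            }
        }

    A saved preset per download source is then just that map persisted somewhere, and the duplicate check becomes a lookup on whatever fields identify a person before deciding to overwrite or keep the existing values.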

    Read the article

  • In SQL Server what is most efficient way to compare records to other records for duplicates with in

    - by Glenn
    We have an SQL Server that gets daily imports of data files from clients. This data is interrelated and we are always scrubbing it and having to look for suspect duplicate records between these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is by using a cursor based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINS. But I have to admit I'm having a lot of trouble with this one. For a concrete example suppose we have 1 table, an "orders" table, with the following 6 fields. order_id, customer_id product_id, quantity, sale_date, price We want to look through the records to find suspect duplicates on the following example criteria. These get increasingly harder. 1. Records that have the same product_id, sale_date, and quantity but different customer_id's should be marked as suspect duplicates for review. 2. Records that have the same customer_id, product_id, quantity and have sale_dates within five days of each other should be marked as suspect duplicates for review 3. Records that have the same customer_id, product_id, but different quantities within 20 units, and sales dates within five days of each other should be considered suspect. Is it possible to satisfy each one of these criteria with a single SQL Query that uses JOINS? Is this the most efficient way to do this?

    Read the article

  • Managing Many to Many relationships in asp.net Wizard Control

    - by Luis
    Say I have this entity with a lot of attributes. In the input form I have decided to implement a wizard control so I can collect information about this entity in several steps. The problem is that I need to collect information that has been modeled as many-to-many relationships. I am planning to use a Telerik gridview to manage this (add/edit/delete); the problem is where to store that data, since in an insert form the entity has not been created in the database yet. OK, so I can store all that info in temporary lists residing in the viewstate, waiting for the final submit where I dump it all into the DB, but in one of the steps I am collecting files... and storing files in the viewstate is out of the question, same as storing them in the session... I have been thinking of implementing it in a way that the user has to submit some info first (say the first 3 steps), commit the data to the database creating the parent entity, and then start inserting all the child entities... but this will get weird, as it's confusing that in the first steps you are not saving the data to the DB and in the next ones you are committing directly... Does anyone have any thoughts on this? Thanks

    Read the article

  • PHP Include and sort by variable within file

    - by Jason Hoax
    I have written this PHP include script, but now I'm trying to sort the included files by variables WITHIN the included PHP files. In other words, in each included PHP file there is a rating; now I want the ratings to be read so that when the files are included they are sorted from highest to lowest (scores range from about 6.0 to 9.0). Kind regards! $location = 'experiments/visualizations'; foreach (glob("$location/*.php") as $filename) { include $filename; } The included files are named randomly, like: File 1: $filename = "AAAA"; $projecttitle = "Project Name"; $description = "This totally explains the product"; $score = "7.6"; File 2: $filename = "BBBB"; $projecttitle = "Project Name2" $description = "This totally explains the product"; $score = "9.6"; As you can see, 9.6 is higher than 7.6, but PHP sorts the includes by name instead of by the variables within the files. I tried sorting, but I can't get it fixed. Help!

    Read the article

  • How to write a simple Lexer/Parser with antlr 2.7?

    - by Burkhard
    Hello, I have a complex grammar (in antlr 2.7) which I need to extend. Having never used antlr before, I wanted to write a very simple Lexer and Parser first. I found a very good explanation for antlr3 and tried to adapt it: header{ #include <iostream> using namespace std; } options { language="Cpp"; } class P2 extends Parser; /* This will be the entry point of our parser. */ eval : additionExp ; /* Addition and subtraction have the lowest precedence. */ additionExp : multiplyExp ( "+" multiplyExp | "-" multiplyExp )* ; /* Multiplication and addition have a higher precedence. */ multiplyExp : atomExp ( "*" atomExp | "/" atomExp )* ; /* An expression atom is the smallest part of an expression: a number. Or when we encounter parenthesis, we're making a recursive call back to the rule 'additionExp'. As you can see, an 'atomExp' has the highest precedence. */ atomExp : Number | "(" additionExp ")" ; /* A number: can be an integer value, or a decimal value */ number : ("0".."9")+ ("." ("0".."9")+)? ; /* We're going to ignore all white space characters */ protected ws : (" " | "\t" | "\r" | "\n") { newline(); } ; It does generate four files without errors: P2.cpp, P2.hpp, P2TokenTypes.hpp and P2TokenTypes.txt. But now what? How do I create a working program from these? I tried to add the files to a VS2005 Win32 console project, but it does not compile: p2.cpp(277) : fatal error C1010: unexpected end of file while looking for precompiled header. Did you forget to add '#include "stdafx.h"' to your source?

    Read the article

  • What is the right path for PHP includes on a Mac?

    - by skorned
    Running Mac OS X 10.5.8, with PHP 5.2.11 pre-installed, using Coda 1.6.10. I'm writing PHP files and then previewing them running from file, not from a server. This was working fine until I tried PHP includes. These don't work as a relative path, only as an absolute path from the root of the drive. Is there any way I can use statements like include_once "common/header.php"; without specifying my entire file path like so: include_once "/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0/common/base.php"; where ColoredLists_v1.0 is the directory with all the website files in it? I tried solutions like prepending $_SERVER['DOCUMENT_ROOT'] or dirname(__FILE__) to the file paths, but that didn't work as the variables were not set. Is there any easy way to do this, or a configuration I can change so that it looks in a specific directory by default instead of looking at the drive root? Currently, echoing the include_path shows .: When I include this line at the start of the script, it works: set_include_path('/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0'); However, if I want to do this for all my scripts, I can't seem to make the change permanent. Even after I edited the Unix include_path in my php.ini, it doesn't seem to work.

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;) ). Excel has the ideal interface for the report consumers in the form of its Data Lists: users can filter and segment the data on-the-fly to see the specific details that they are interested in - they can also add notes and markup to the reports, create charts, graphs, etc... They know how to do all this and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files? The next tool I tried was MS Access, and I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same plsql script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with tables linked to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so while Access would take anywhere from 2-5 minutes for the same data). (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • folder structure in a mercurial repo?

    - by ajsie
    I have just switched from svn to mercurial and have read some tutorials about it. I've still got some confusion that I hope you can help me sort out. I wonder if I have understood the folder structure in a mercurial repo correctly. In an svn repo I usually have these folders: svn: branches (branches/chat, branches/new_login etc), tags (version1.0, version2.0 etc), sandbox, trunk. Should a branch actually be another clone of the original/central repo in mercurial? It seemed like that when I read the manual. And a tag is just a named identifier, but you should clone the original/central repo whenever you want to create a tag? How about the sandbox? Should that be another clone too? So basically you just have in a repo all the folders/files that you would have in the trunk folder? mercurial: central repo: project folders/files (not in any parent folder); tag repo: cloned from the central repo at a given moment for release (version1.0, version2.0 etc); branch repo: cloned from the central repo for adding features (chat, new_login etc); sandbox repo: experimental repo (could be pushed to the central repo, or just deleted). Is this correct?

    Read the article

  • Problem Routing domains subfolder

    - by hkda150
    Hi there, I'm pretty new to ASP.NET MVC and I hope this is not too silly a question. So here it comes. I have... an ASP.NET MVC application with a domain similar to http://mydomain/mysubfoler1/myappfolder My problem... The problem for me is the routing of my application (it worked fine without using a subfolder after the domain name). The application's homepage loads reasonably well, with CSS files but without resources like images (defined in the CSS files) and without the jQuery ajax calls similar to /mycontroller/myaction. Links only work once (the second time I get a page similar to this link: http://mydomain/mysubfoler1/myappfolder/myController/myController/myAction). Here's my Global.asax containing the routing: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", "{controller}/{action}/{id}", new { controller = "myController", action = "Index", id = "" } defaults ); routes.MapRoute( "Root", "", new { controller = "myController", action = "Index", id = "" } ); } protected void Application_Start() { ViewEngines.Engines.Clear(); ViewEngines.Engines.Add(new MyApplicationWeb.LocalizationWebFormViewEngine()); RegisterRoutes(RouteTable.Routes); //RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes); } Any suggestions? My first attempt was to use areas like: "mysubfolder1/myappfolder/{controller}/{action}/{id}" (but without any luck). Thank you very much for your help!

    Read the article

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically a big precomputed table that I need in order to speed up some computation by several orders of magnitude (creating that file took several multicore computers several days, using an optimized and multi-threaded algorithm... I really do need that file). Now that it has been computed once, that 800 MB of data is read-only. I cannot hold it in memory. As of now it is one big huge 800 MB file, but splitting it into smaller files isn't a problem if that can help. I need to read about 32 bits of data here and there in that file, many times. I don't know beforehand where I'll need to read this data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped files': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data. BTW, in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
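
    A minimal sketch of the positional-read route (no memory mapping): one FileChannel shared by all readers, with each lookup reading 4 bytes at an absolute offset. FileChannel's positional read does not move a shared file pointer, so unrelated threads can call it without queuing. Whether this beats a MappedByteBuffer for uniformly random reads over 800 MB is something to benchmark rather than assume; the class name and error handling here are illustrative only.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;

        public class RandomTableReader implements AutoCloseable {
            private final FileChannel channel;

            public RandomTableReader(Path table) throws IOException {
                channel = FileChannel.open(table, StandardOpenOption.READ);
            }

            // Reads the 32-bit value stored at the given byte offset.
            public int readInt(long offset) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(4);
                while (buf.hasRemaining()) {
                    if (channel.read(buf, offset + buf.position()) < 0) {
                        throw new IOException("offset past end of file: " + offset);
                    }
                }
                buf.flip();
                return buf.getInt();
            }

            @Override
            public void close() throws IOException {
                channel.close();
            }
        }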

    Read the article

  • Checking when two headers are included at the same time.

    - by fortran
    Hi, I need to do an assertion based on two related preprocessor #defines declared in different header files... The codebase is huge and it would be nice if I could find a place to put the assertion where the two headers are already included, to avoid polluting namespaces unnecessarily. Checking just that a file includes both explicitly might not suffice, as one (or both) of them might be included at an upper level of a nested include hierarchy. I know it wouldn't be too hard to write a script to check that, but if there's already a tool that does the job, so much the better. Example: file foo.h #define FOO 0xf file bar.h #define BAR 0x1e I need to put somewhere (it doesn't matter much where) something like this: #if (2*FOO) != BAR #error "foo is not twice bar" #endif Yes, I know the example is silly, as they could be rewritten so that one is derived from the other, but let's say that the includes can be generated from different places not under my control and I just need to check that they match at compile time... And I don't want to just add one include after the other, as it might conflict with existing code that I haven't written, so that's why I would like to find a file where both are already present. In brief: how can I find a file that includes (directly or indirectly) two other files? Thanks!

    Read the article

  • php code in FTP to backup db using URL

    - by Giom
    I found this code and inserted it on my FTP host in a backup.php; it works great, but it copies the file into the root folder (homez/). 1- I would like to put these backup files in homez/backupDB; any idea where to put this path? 2- The DB backup files (DB.sql.bz2) always have the same name; is it possible to name each one with its creation date? (I launch this PHP via the URL link.) <? echo "Votre base est en cours de sauvegarde....... "; $db="nom_de_ma_base"; $status=system("mysqldump --host=mysql5-1.perso --user=$_POST[login] --password=$_POST[password] $db > ../$db.sql"); echo $status; echo "Compression du fichier..... "; system("bzip2 -f ../$db.sql"); echo "C'est fini. Vous pouvez récupérer la base par FTP \n "; ?> Thanks!

    Read the article
