Search Results

Search found 23480 results on 940 pages for 'directory structure'.

Page 136/940 | < Previous Page | 132 133 134 135 136 137 138 139 140 141 142 143  | Next Page >

  • htaccess mod_rewrite check file/directory existence, else rewrite?

    - by devians
    I have a very heavy htaccess mod_rewrite file that runs my application. As we sometimes take over legacy websites, I sometimes need to support old URLs to old files, where my application processes everything post htaccess. My ultimate goal is to have a 'Demilitarized Zone' for old file structures, and use mod_rewrite to check for existence there before pushing to the application. This is pretty easy to do with files, by using:

      RewriteCond %{IS_SUBREQ} true
      RewriteRule .* - [L]
      RewriteCond %{ENV:REDIRECT_STATUS} 200
      RewriteRule .* - [L]
      RewriteCond Public/DMZ/$1 -F [OR]
      RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]

    This allows pseudo support for relative URLs by not hardcoding my base path anywhere (I can't assume I will ever be deployed in the document root) and by using subrequests to check for file existence. It works fine if you know the file name, i.e. http://domain.com/path/to/app/legacyfolder/index.html. However, my legacy URLs are typically http://domain.com/path/to/app/legacyfolder/. mod_rewrite will let me check for this with -d, but it needs the complete path to the directory, i.e.

      RewriteCond Public/DMZ/$1 -F [OR]
      RewriteCond /var/www/path/to/app/Public/DMZ/$1 -d
      RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]

    I want to avoid the hardcoded base path. I can see one possible solution here: somehow determining my path, attaching it to a variable [E=name:var] and using it in the condition. Another option is using -U, but the tricky part is stopping it from hijacking every other request when they should flow through, since -U is really easy to satisfy. Any implementation that allows me to existence-check a directory is more than welcome. I am not interested in using RewriteBase, as that requires my htaccess to have a hardcoded base path.
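    One possible approach, sketched below and untested against this setup: derive the URI prefix the .htaccess lives under into an environment variable, then prepend %{DOCUMENT_ROOT} plus that prefix when testing for the directory. This assumes the application's URI path mirrors its filesystem path below the document root; the BASE name is purely illustrative.

      # Capture the per-directory prefix (the part of the URI RewriteRule stripped) into BASE
      RewriteCond %{REQUEST_URI}::$1 ^(.*?/)(.*)::\2$
      RewriteRule ^(.*)$ - [E=BASE:%1]

      # Existence-check the DMZ directory without hardcoding the install path
      RewriteCond %{DOCUMENT_ROOT}%{ENV:BASE}Public/DMZ/$1 -d
      RewriteRule ^(.*)$ Public/DMZ/$1 [QSA,L]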

    Read the article

  • N-Tier Architecture - Structure with multiple projects in VB.NET

    - by focus.nz
    I would like some advice on the best approach to use in the following situation... I will have a Windows Application and a Web Application (presentation layers); these will both access a common business layer. The business layer will look at a configuration file to find the name of the dll (data layer) which it will create a reference to at runtime (is this the best approach?). The reason for creating the reference to the data access layer at runtime is that the application will interface with a different 3rd party accounting system depending on what the client is using. So I would have a separate data access layer to support each accounting system. These could be separate setup projects; each client would use one or the other, they wouldn't need to switch between the two. Projects:

      MyCompany.Common.dll - Contains interfaces, all other projects have a reference to this one.
      MyCompany.Windows.dll - Windows Forms Project, references MyCompany.Business.dll
      MyCompany.Web.dll - Website project, references MyCompany.Business.dll
      MyCompany.Business.dll - Business Layer, references MyCompany.Data.* (at runtime)
      MyCompany.Data.AccountingSys1.dll - Data layer for accounting system 1
      MyCompany.Data.AccountingSys2.dll - Data layer for accounting system 2

    The project MyCompany.Common.dll would contain all the interfaces; each other project would have a reference to this one.

      Public Interface ICompany
          ReadOnly Property Id() As Integer
          Property Name() As String
          Sub Save()
      End Interface

      Public Interface ICompanyFactory
          Function CreateCompany() As ICompany
      End Interface

    The projects MyCompany.Data.AccountingSys1.dll and MyCompany.Data.AccountingSys2.dll would contain classes like the following:

      Public Class Company
          Implements ICompany

          Protected _id As Integer
          Protected _name As String

          Public ReadOnly Property Id As Integer Implements MyCompany.Common.ICompany.Id
              Get
                  Return _id
              End Get
          End Property

          Public Property Name As String Implements MyCompany.Common.ICompany.Name
              Get
                  Return _name
              End Get
              Set(ByVal value As String)
                  _name = value
              End Set
          End Property

          Public Sub Save() Implements MyCompany.Common.ICompany.Save
              Throw New NotImplementedException()
          End Sub
      End Class

      Public Class CompanyFactory
          Implements ICompanyFactory

          Public Function CreateCompany() As ICompany Implements MyCompany.Common.ICompanyFactory.CreateCompany
              Return New Company()
          End Function
      End Class

    The project MyCompany.Business.dll would provide the business rules and retrieve data from the data layer:

      Public Class Companies
          Public Shared Function CreateCompany() As ICompany
              Dim factory As New MyCompany.Data.CompanyFactory
              Return factory.CreateCompany()
          End Function
      End Class

    Any opinions/suggestions would be greatly appreciated.
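    For the runtime reference, one common pattern is to read the assembly name from configuration and create the factory through reflection, so the business layer never takes a compile-time dependency on either data assembly. A sketch only; the config key and the factory type name are assumptions, not part of the question:

      Imports System.Configuration
      Imports System.Reflection

      Public Class DataLayerResolver
          Public Shared Function CreateCompanyFactory() As ICompanyFactory
              ' e.g. <add key="DataAssembly" value="MyCompany.Data.AccountingSys1"/>
              Dim assemblyName As String = ConfigurationManager.AppSettings("DataAssembly")
              Dim asm As Assembly = Assembly.Load(assemblyName)
              ' Assumes each data assembly exposes a type named CompanyFactory
              Dim factoryType As Type = asm.GetType(assemblyName & ".CompanyFactory")
              Return CType(Activator.CreateInstance(factoryType), ICompanyFactory)
          End Function
      End Class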

    Read the article

  • XML Serialize and Deserialize Problem XML Structure

    - by Ph.E
    Comrades, I'm having the following problem. I take a list of structs, serialize it (it validates at the W3C) and send it to a web service. In the web service I receive it, transform it to a string, validate it at the W3C again and then deserialize it, but when I try to run it an error always occurs saying that some objects were not closed. Any help? Sent code:

      #region ListToXML
      private XmlDocument ListToXMLDocument(object __Lista)
      {
          XmlDocument _ListToXMLDocument = new XmlDocument();
          try
          {
              XmlDocument _XMLDoc = new XmlDocument();
              MemoryStream _StreamMem = new MemoryStream();
              XmlSerializer _XMLSerial = new XmlSerializer(__Lista.GetType());
              StreamWriter _StreamWriter = new StreamWriter(_StreamMem, Encoding.UTF8);
              _XMLSerial.Serialize(_StreamWriter, __Lista);
              _StreamMem.Position = 0;
              _XMLDoc.Load(_StreamMem);
              if (_XMLDoc.ChildNodes.Count > 0)
                  _ListToXMLDocument = _XMLDoc;
          }
          catch (Exception __Excp)
          {
              new uException(__Excp).GerarLogErro(CtNomeBiblioteca);
          }
          return _ListToXMLDocument;
      }
      #endregion

    Receive code:

      #region XMLDocumentToTypedList
      private List<T> XMLDocumentToTypedList<T>(string __XMLDocument)
      {
          List<T> _XMLDocumentToTypedList = new List<T>();
          try
          {
              XmlSerializer _XMLSerial = new XmlSerializer(typeof(List<T>));
              MemoryStream _MemStream = new MemoryStream();
              StreamWriter _StreamWriter = new StreamWriter(_MemStream, Encoding.UTF8);
              _StreamWriter.Write(__XMLDocument);
              _MemStream.Position = 0;
              _XMLDocumentToTypedList = (List<T>)_XMLSerial.Deserialize(_MemStream);
              return _XMLDocumentToTypedList;
          }
          catch (Exception _Ex)
          {
              new uException(_Ex).GerarLogErro(CtNomeBiblioteca);
              throw _Ex;
          }
      }
      #endregion
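    One likely culprit, offered as a guess rather than a confirmed fix: the receive side never flushes the StreamWriter, so the MemoryStream may still be empty or truncated when Deserialize reads it, which tends to produce "element not closed" style XML errors. A minimal sketch of the same method with the flush added:

      private List<T> XMLDocumentToTypedList<T>(string xml)
      {
          XmlSerializer serializer = new XmlSerializer(typeof(List<T>));
          using (MemoryStream stream = new MemoryStream())
          using (StreamWriter writer = new StreamWriter(stream, Encoding.UTF8))
          {
              writer.Write(xml);
              writer.Flush();          // push the buffered text into the stream
              stream.Position = 0;     // rewind before reading
              return (List<T>)serializer.Deserialize(stream);
          }
      }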

    Read the article

  • ERROR: iPhone Private Frameworks "No such file or directory"

    - by WrightsCS
    I have added Private Frameworks to my project. When I build in DEVICE | RELEASE everything works fine and I am able to ldid -S the application and it successfully launches on my device. However, when trying to BUILD AND GO in Simulator, I get the error "No such file or directory" as indicated below (I also get the error twice, which is strange too):

      HomeProfileViewController.h:10: error: BluetoothManager/BluetoothManager.h: No such file or directory

    Below are the project and build settings that I currently have; maybe someone can find a mistake and let me know, that would be awesome!

    PROJECT SETTINGS:

      PRIVATE_HEADERS_FOLDER_PATH = "/Developer/SDKs/iPhoneOS.sdk/Versions/iPhoneOS3.0.sdk/include"
      PUBLIC_HEADERS_FOLDER_PATH = "/Developer/SDKs/iPhoneOS.sdk/Versions/iPhoneOS3.0.sdk/include"
      USER_HEADER_SEARCH_PATHS = "/Developer/SDKs/iPhoneOS.sdk/Versions/iPhoneOS3.0.sdk/include"
      OTHER_CFLAGS = "-I/Developer/SDKs/iPhoneOS.sdk/Versions/iPhoneOS3.0.sdk/include -I/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/include -I/Developer/Platforms/iPhoneOS.platform/Developer/usr/lib/gcc/arm-apple-darwin9/4.0.1/include -F/System/Library/Frameworks -F/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/System/Library/Frameworks -F/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/System/Library/PrivateFrameworks -DMAC_OS_X_VERSION_MAX_ALLOWED=1050"

    TARGET BUILD SETTINGS:

      PRIVATE_HEADERS_FOLDER_PATH = "/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/System/Library/PrivateFrameworks"
      FRAMEWORK_SEARCH_PATHS = "$(inherited) $(SDKROOT)$(SYSTEM_LIBRARY_DIR)/PrivateFrameworks"
      USER_HEADER_SEARCH_PATHS = "/Developer/SDKs/iPhoneOS.sdk/Versions/iPhoneOS3.0.sdk/include/**"
      OTHER_CFLAGS = "-I/Developer/SDKs/iPhoneOS.sdk/Versions/iPhoneOS3.0.sdk/include -I/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/include -I/Developer/Platforms/iPhoneOS.platform/Developer/usr/lib/gcc/arm-apple-darwin9/4.0.1/include -F/System/Library/Frameworks -F/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/System/Library/Frameworks -F/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/System/Library/PrivateFrameworks -DMAC_OS_X_VERSION_MAX_ALLOWED=1050"

    Note: The quotation marks in the paths aren't actually in my project, I put them in so the site will syntax them better.

    Read the article

  • Class Structure w/ LINQ, Partial Classes, and Abstract Classes

    - by Jason
    I am following the Nerd Dinner tutorial as I'm learning ASP.NET MVC, and I am currently on Step 3: Building the Model. One part of this section discusses how to integrate validation and business rule logic with the model classes. All this makes perfect sense. However, in the case of this source code, the author only validates one class: Dinner. What I am wondering is, say I have multiple classes that need validation (Dinner, Guest, etc). It doesn't seem smart to me to repeatedly write these two methods in the partial class: public bool IsValid { get { return (GetRuleViolations().Count() == 0); } } partial void OnValidate(ChangeAction action) { if (!IsValid) { throw new ApplicationException("Rule violations prevent saving."); } } What I'm wondering is, can you create an abstract class (because "GetRuleViolations" needs to be implemented separately) and extend a partial class? I'm thinking something like this (based on his example): public partial class Dinner : Validation { public IEnumerable<RuleViolation> GetRuleViolations() { yield break; } } This doesn't "feel" right, but I wanted to check with SO to get opinions of individuals smarter than me on this. I also tested it out, and it seems that the partial keyword on the OnValidate method is causing problems (understandably so). This doesn't seem possible to fix (but I could very well be wrong). Thanks!
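    A partial class can declare a base class, so one option is to pull the shared pieces into an abstract Validation base and keep only the designer-generated partial method in each entity. This is a sketch of that idea, not code from the tutorial, and it changes GetRuleViolations to an override rather than the tutorial's plain public method:

      using System;
      using System.Collections.Generic;
      using System.Data.Linq;
      using System.Linq;

      public abstract class Validation
      {
          public bool IsValid
          {
              get { return !GetRuleViolations().Any(); }
          }

          public abstract IEnumerable<RuleViolation> GetRuleViolations();
      }

      public partial class Dinner : Validation
      {
          public override IEnumerable<RuleViolation> GetRuleViolations()
          {
              yield break;
          }

          // OnValidate is declared as a partial method by the LINQ to SQL designer,
          // so its implementation has to stay inside the partial class itself.
          partial void OnValidate(ChangeAction action)
          {
              if (!IsValid)
                  throw new ApplicationException("Rule violations prevent saving.");
          }
      }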

    Read the article

  • Git-svn branch hoses dcommit when using an odd branch structure

    - by Chuck Vose
    I had a boss, past-tense, who decided to put svn branches in the same folder as trunk. Normally, this wouldn't affect me that much but since I'm using git-svn things are going so well. After I did a fetch it created a folder for each branch in my root folder so I have three folders, drupal, trunk, and client. The drupal folder is git's master branch, client and trunk are the svn branches. Merging and committing works great, in fact everything git related is working superb. However dcommit is totally hosed, it's trying to commit a folder called client and one called trunk. I can't even imagine what havoc this would cause for svn later on. So my question is, what have I done wrong in my .git/config and is there anything I can do to fix this or am I going to have to suffer and go back to using svn? Please don't make me go back. I don't think I can take it anymore. Bastard boss knows how to leave a legacy. [svn-remote "svn"] url = https://svn.mydomain.com/svn/project_name fetch = trunk:refs/remotes/trunk branches = *:refs/remotes/* tags = tags/*:refs/remotes/tags/* Normally the branches line would look like this (when using --stdlayout): branches = branches/*:refs/remotes/branches/* ls output is thus: $ ls client/ docs/ drupal/ sql/ trunk/
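    One thing that may help, offered as a sketch rather than a verified fix: git-svn accepts several fetch lines inside a single svn-remote, so the two root-level branches can be mapped explicitly instead of through a branches glob that sweeps up everything sitting next to trunk. Changing this normally means re-running git svn fetch in a fresh clone:

      [svn-remote "svn"]
          url = https://svn.mydomain.com/svn/project_name
          fetch = trunk:refs/remotes/trunk
          fetch = client:refs/remotes/client
          tags = tags/*:refs/remotes/tags/*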

    Read the article

  • How to structure a Genetic Algorithm class hierarchy?

    - by MahlerFive
    I'm doing some work with Genetic Algorithms and want to write my own GA classes. Since a GA can have different ways of doing selection, mutation, cross-over, generating an initial population, calculating fitness, and terminating the algorithm, I need a way to plug in different combinations of these. My initial approach was to have an abstract class that had all of these methods defined as pure virtual, and any concrete class would have to implement them. If I want to try out two GAs that are the same but with different cross-over methods for example, I would have to make an abstract class that inherits from GeneticAlgorithm and implements all the methods except the cross-over method, then two concrete classes that inherit from this class and only implement the cross-over method. The downside to this is that every time I want to swap out a method or two to try out something new I have to make one or more new classes. Is there another approach that might apply better to this problem?
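    One alternative is composition over inheritance: let the GA own one strategy object (or function) per step, so trying a new crossover means passing in a different strategy rather than deriving a new class. A rough C++ sketch with purely illustrative names:

      #include <functional>
      #include <vector>

      using Genome = std::vector<double>;

      struct GeneticAlgorithm {
          // Each operator is swappable at construction time, no subclassing needed.
          std::function<std::vector<Genome>()> initialise;
          std::function<double(const Genome&)> fitness;
          std::function<Genome(const std::vector<Genome>&)> select;
          std::function<Genome(const Genome&, const Genome&)> crossover;
          std::function<void(Genome&)> mutate;
          std::function<bool(int)> terminated;   // argument: generation number
      };

      // Usage: build one GA per experiment, e.g.
      //   GeneticAlgorithm ga;
      //   ga.crossover = onePointCrossover;   // or twoPointCrossover, uniformCrossover, ...
      //   ga.mutate    = flipMutation;

    The same idea works with small abstract interfaces (CrossoverStrategy, MutationStrategy, ...) injected into the GA's constructor, if you prefer virtual dispatch over std::function.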

    Read the article

  • Heap data structure

    - by turmoil
    Trying to think of a lower bound on the position of, say, the nth largest key in a max-heap, assuming the heap is laid out in an array. The upper bound is min(2^n - 2, array size - 1), I think, but is it always lower bounded by 0?

    Read the article

  • C#, create virtual directory on remote system

    - by sankar
    The following code creates a virtual directory only on the local system, but I need to create one on a remote system. Help me. Thanks, Sankar

      DirectoryEntry iisServer;
      string VirDirSchemaName = "IIsWebVirtualDir";

      public DirectoryEntry Connect()
      {
          try
          {
              if (txtPath.Text.ToLower().Trim() == "localhost")
                  iisServer = new DirectoryEntry("IIS://" + txtPath.Text.Trim() + "/W3SVC/1/Root");
              else
                  iisServer = new DirectoryEntry("IIS://" + txtPath.Text + "/Schema/AppIsolated", "XYZ", "xyz");
              iisServer.Dispose();
          }
          catch (Exception e)
          {
              throw new Exception("Could not connect to: " + txtPath.Text.Trim(), e);
          }
          return iisServer;
      }

      public void CreateVirtualDirectory(DirectoryEntry iisServer)
      {
          DirectoryEntry folderRoot = new DirectoryEntry("IIS://" + txtPath.Text + "/W3SVC/1/Root", "XYZ", "xyz");
          folderRoot.RefreshCache();
          folderRoot.CommitChanges();
          try
          {
              DirectoryEntry newVirDir = folderRoot.Children.Add(txtName.Text, VirDirSchemaName);
              newVirDir.CommitChanges();
              newVirDir.Properties["AccessRead"].Add(true);
              newVirDir.Properties["Path"].Add(@"\\abc\abc");
              newVirDir.Invoke("AppCreate", true);
              newVirDir.CommitChanges();
              folderRoot.CommitChanges();
              newVirDir.Close();
              folderRoot.CommitChanges();
          }
          catch (Exception e)
          {
              throw new Exception("Error! Virtual Directory Not Created", e);
          }
      }

      protected void btnCreate_Click(object sender, EventArgs e)
      {
          try
          {
              CreateVirtualDirectory(Connect());
          }
          catch (Exception ex)
          {
              Response.Write(ex.Message);
          }
      }

      protected void Page_Load(object sender, EventArgs e)
      {
      }

    Read the article

  • Pyjamas + Django: project without any external libraries

    - by gruszczy
    I would like to create small project using django and pyjamas. I tried googling for some solution on how to merge those two, but I found only projects using some external libraries using json services. Could anyone give me some advice on how to build such project so I wouldn't have to use them? I would like to use django auth system, but I don't know how to build it all without django templates and rendering.

    Read the article

  • Parse and transform XML with missing elements into table structure

    - by dnlbrky
    I'm trying to parse an XML file. A simplified version of it looks like this: x <- '<grandparent><parent><child1>ABC123</child1><child2>1381956044</child2></parent><parent><child2>1397527137</child2></parent><parent><child3>4675</child3></parent><parent><child1>DEF456</child1><child3>3735</child3></parent><parent><child1/><child3>3735</child3></parent></grandparent>' library(XML) xmlRoot(xmlTreeParse(x)) ## <grandparent> ## <parent> ## <child1>ABC123</child1> ## <child2>1381956044</child2> ## </parent> ## <parent> ## <child2>1397527137</child2> ## </parent> ## <parent> ## <child3>4675</child3> ## </parent> ## <parent> ## <child1>DEF456</child1> ## <child3>3735</child3> ## </parent> ## <parent> ## <child1/> ## <child3>3735</child3> ## </parent> ## </grandparent> I'd like to transform the XML into a data.frame / data.table that looks like this: parent <- data.frame(child1=c("ABC123",NA,NA,"DEF456",NA), child2=c(1381956044, 1397527137, rep(NA, 3)), child3=c(rep(NA, 2), 4675, 3735, 3735)) parent ## child1 child2 child3 ## 1 ABC123 1381956044 NA ## 2 <NA> 1397527137 NA ## 3 <NA> NA 4675 ## 4 DEF456 NA 3735 ## 5 <NA> NA 3735 If each parent node always contained all of the possible elements ("child1", "child2", "child3", etc.), I could use xmlToList and unlist to flatten it, and then dcast to put it into a table. But the XML often has missing child elements. Here is an attempt with incorrect output: library(data.table) ## Flatten: dt <- as.data.table(unlist(xmlToList(x)), keep.rownames=T) setnames(dt, c("column", "value")) ## Add row numbers, but they're incorrect due to missing XML elements: dt[, row:=.SD[,.I], by=column][] column value row 1: parent.child1 ABC123 1 2: parent.child2 1381956044 1 3: parent.child2 1397527137 2 4: parent.child3 4675 1 5: parent.child1 DEF456 2 6: parent.child3 3735 2 7: parent.child3 3735 3 ## Reshape from long to wide, but some value are in the wrong row: dcast.data.table(dt, row~column, value.var="value", fill=NA) ## row parent.child1 parent.child2 parent.child3 ## 1: 1 ABC123 1381956044 4675 ## 2: 2 DEF456 1397527137 3735 ## 3: 3 NA NA 3735 I won't know ahead of time the names of the child elements, or the count of unique element names for children of the grandparent, so the answer should be flexible.
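    One route worth trying, sketched here and not verified against this exact input: hand the parent nodes to xmlToDataFrame from the XML package, which fills children that are absent in a given parent with NA (the empty <child1/> may come through as an empty string instead), then convert to a data.table:

      library(XML)
      library(data.table)

      doc <- xmlParse(x)
      parent <- xmlToDataFrame(nodes = getNodeSet(doc, "//parent"), stringsAsFactors = FALSE)
      setDT(parent)                          # convert to data.table by reference
      parent[, `:=`(child2 = as.numeric(child2), child3 = as.numeric(child3))]
      parent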

    Read the article

  • How can I copy files with timestamps between 2 times and preserve the directory structure

    - by brushwood
    I want to copy files that have timestamps between the time the script begins to run and an hour previous. So I am basically trying to emulate robocopy, but with minage and maxage going down to the exact time rather than days. So far I have this in PowerShell:

      $now = Get-Date
      $previousHour = $now.AddHours(-1)
      "Copying files from $previousHour to $now"

      function DoCopy ($source, $destination) {
          $fileList = gci $source -Recurse
          foreach ($file in $fileList) {
              if ($file.LastWriteTime -lt $now -and $file.LastWriteTime -gt $previousHour) {
                  # Do the copy
              }
          }
      }

      DoCopy "C:\test" "C:\test2"

    but if I try to do the copy like that, it copies all the files directly into the destination folder rather than into the subfolders.
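    One way to keep the hierarchy, sketched under the assumption that plain file copies are all that is needed: rebuild each file's path relative to $source underneath $destination, creating the target folder first.

      $now = Get-Date
      $previousHour = $now.AddHours(-1)

      function DoCopy ($source, $destination) {
          $fileList = gci $source -Recurse | Where-Object { -not $_.PSIsContainer }
          foreach ($file in $fileList) {
              if ($file.LastWriteTime -lt $now -and $file.LastWriteTime -gt $previousHour) {
                  # path of the file relative to the source root
                  $relative = $file.FullName.Substring($source.Length).TrimStart('\')
                  $target = Join-Path $destination $relative
                  # make sure the matching subfolder exists, then copy
                  New-Item -ItemType Directory -Path (Split-Path $target) -Force | Out-Null
                  Copy-Item $file.FullName -Destination $target
              }
          }
      }

      DoCopy "C:\test" "C:\test2"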

    Read the article

  • fluent nHibernate mapping of subclassed structure

    - by Codezy
    I have a workflow class that has a collection of phases, each phase has a collection of tasks. You can design a workflow that will be used by many engagements. When used in engagement I want to be able to add properties to each class (workflow, phase, and task). For example a task in the designer does not have people assigned, but a task in an engagement would need extra properties like who is assigned to it. I have tried many different approaches using subclasses or interfaces but I just can't get it to map the way I want. Currently I have the engagement level versions as subclasses, but I can't get Engagement phases to map to engagement workflows. Public Class WorkflowMapping Inherits ClassMap(Of Workflow) Sub New() Id(Function(x As Workflow) x.Id).Column("Workflow_Id").GeneratedBy.Identity() Map(Function(x As Workflow) x.Description) Map(Function(x As Workflow) x.Generation) Map(Function(x As Workflow) x.IsActive) HasMany(Function(x As Workflow) x.Phases).Cascade.All() End Sub End Class Public Class EngagementWorkflowMapping Inherits SubclassMap(Of EngagementWorkflow) Sub New() Map(Function(x As EngagementWorkflow) x.ClientNo) Map(Function(x As EngagementWorkflow) x.EngagementNo) End Sub End Class How would you approach mapping this in fluent (or hbm) so that you could load just the workflow base class when designing the flow, or the engagement subclass versions of each when being used by an engagement?

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods:

      public void AddItemSecurity(int itemId, int[] userIds)
      public int[] GetValidItemIds(int userId)

    Initially I'm thinking storage of the form itemId -> userId, userId, userId and userId -> itemId, itemId, itemId. AddItemSecurity is based on how I get data from a third party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item id's are of the form 2007123456, 2010001234 (10 digits where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidIds needs to be subsecond. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item id's had started at 0, I thought about creating a byte array the length of MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to a little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as Memory Mapped Files with the .NET 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The ItemId -> UserId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added all the lists have to be updated as well, but this can be done nightly.

    Question: Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would give an overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)

    [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions:

      - Table with two columns (userid, itemid), both are Int
      - Clustered index on the two columns
      - Added ~800.000 items for 180 users - total of 144 million rows
      - Allocated 4 GB RAM for SQL Server
      - Dual Core 2.66 GHz laptop
      - SSD disk
      - Use a SqlDataReader to read all itemid's into a List
      - Loop over all users

    If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still ok. From there on the results degrade. Adding a third thread brings a lot of the queries up to 2 seconds. A fourth thread, up to 4 seconds, a fifth spikes some of the queries up to 50 seconds. The CPU is roofing while this is going on, even on one thread. My test app takes some of it due to the speedy loop, and SQL the rest. Which leads me to the conclusion that it won't scale very well. At least not on my tested hardware. Are there ways to optimize the database, say storing an array of int's per user instead of one record per item? But this makes it harder to remove items.
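    For reference, a minimal in-memory sketch of the per-user bit-array idea described above; persistence, memory mapping and the "remove users no longer in the list" handling are left out, and all names are illustrative:

      using System;
      using System.Collections;
      using System.Collections.Generic;

      class ItemSecurityIndex
      {
          // itemId = 4-digit year followed by a 6-digit sequence, e.g. 2007123456
          private const int PerYear = 1000000;

          // userId -> (year -> one bit per possible sequence number, ~122 KB each)
          private readonly Dictionary<int, Dictionary<int, BitArray>> _bits =
              new Dictionary<int, Dictionary<int, BitArray>>();

          public void AddItemSecurity(int itemId, int[] userIds)
          {
              int year = itemId / PerYear, seq = itemId % PerYear;
              foreach (int user in userIds)
              {
                  Dictionary<int, BitArray> years;
                  if (!_bits.TryGetValue(user, out years))
                      _bits[user] = years = new Dictionary<int, BitArray>();
                  BitArray bits;
                  if (!years.TryGetValue(year, out bits))
                      years[year] = bits = new BitArray(PerYear);
                  bits[seq] = true;
              }
          }

          public int[] GetValidItemIds(int userId)
          {
              var result = new List<int>();
              Dictionary<int, BitArray> years;
              if (_bits.TryGetValue(userId, out years))
                  foreach (KeyValuePair<int, BitArray> pair in years)
                      for (int seq = 0; seq < PerYear; seq++)
                          if (pair.Value[seq])
                              result.Add(pair.Key * PerYear + seq);
              return result.ToArray();
          }
      }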

    Read the article

  • How to make a structure map powered viewengine in asp.net mvc

    - by Andrew Bullock
    My views extend a base view class ive made: public class BaseView : ViewPage At the moment im calling ObjectFactory.GetInstance inside this class' constructor to get some interface implementations but id like to use structuremap to inject them as constructor arguments. Im using a structuremapcontrollerfactory to create my controllers, but how can i do the same for views? I know i can implement a custom ViewEngine, but using reflector to look at the mvc default viewengine and its dependencies, it seems to go on and on and i'd rather not have to re-implement stuff thats already there. Has anyone got a cunning idea how to solve this? I know i could make things easier with setter instead of constructor injection but id rather avoid that if possible.

    Read the article

  • How to add some complexe structure in multiple places in an XML file

    - by Guillaume
    I have an XML file which has many sections like the one below:

      <Operations>
        <Action [some attributes ...]>
          [some complex content ...]
        </Action>
        <Action [some attributes ...]>
          [some complex content ...]
        </Action>
      </Operations>

    I have to add an <Action> to every <Operations>. It seems that an XSLT should be a good solution to this problem:

      <xsl:template match="Operations/Action[last()]">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
        <Action>[some complex content ...]</Action>
      </xsl:template>

      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

    My problem is that the content of my <Action> contains some XPath expressions. For example:

      <Action code="p_histo01">
        <customScript languageCode="gel">
          <gel:script xmlns:core="jelly:core" xmlns:gel="jelly:com.niku.union.gel.GELTagLibrary" xmlns:soap="jelly:com.niku.union.gel.SOAPTagLibrary" xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:sql="jelly:sql" xmlns:x="jelly:xml" xmlns:xog="http://www.niku.com/xog" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <sql:param value="${gel_stepInstanceId}"/>
          </gel:script>
        </customScript>
      </Action>

    The '${gel_stepInstanceId}' is interpreted by my XSLT but I would like it to be copied as-is. Is that possible? How?

    Read the article

  • Use htaccess to redirect all traffic from subdomain to domain without maintaining directory structur

    - by hal10001
    Most examples show how to redirect all subdomain traffic to a primary domain, maintaining the directory structure. I actually don't want this. I want to redirect all subdomain traffic (the site is going away) to the primary domain. This is not working: Options +FollowSymLinks RewriteEngine on RewriteRule (.*) http://www.newdomain.com/ [R=301,L] What happens, is if you go to this: http://sub.newdomain.com/some/path/ You get this: http://www.newdomain.com/some/path/ I want it all to go to the root.

    Read the article

  • Loop through XML::Simple structure

    - by David
    So I have some xml file like this: <?xml version="1.0" encoding="ISO-8859-1"?> <root result="0" > <settings user="anonymous" > <s n="blabla1" > <v>true</v> </s> <s n="blabla2" > <v>false</v> </s> <s n="blabla3" > <v>true</v> </s> </settings> </root> I want to go through all the settings using the XML Simple. Here's what I have when I print the output with Data::Dumper: $VAR1 = { 'settings' => { 'user' => 'anonymous', 's' => [ { 'n' => 'blabla1', 'v' => 'true' }, { 'n' => 'blabla2', 'v' => 'false' }, { 'n' => 'blabla3', 'v' => 'true' } ] }, 'result' => '0' }; And here's my code $xml = new XML::Simple; $data = $xml->XMLin($file); foreach $s (keys %{ $data->{'settings'}->{'s'} }) { print "TEST: $s $data->{'settings'}->{'s'}->[$s]->{'n'} $data->{'settings'}->{'s'}->[$s]->{'v'}<br>\n"; } And it returns these 2 lines, without looping: TEST: n blabla1 true TEST: v blabla1 true I also tried to do something like this: foreach $s (keys %{ $data->{'settings'}->{'s'} }) { Without any success: Type of arg 1 to keys must be hash (not array dereference) How can I procede? What am I doing wrong? Thanks a lot!
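    Since XML::Simple turns the repeated <s> elements into an array reference (a list of hashes), one fix is to loop over that array directly instead of calling keys on it; a small sketch:

      foreach my $s (@{ $data->{'settings'}->{'s'} }) {
          print "TEST: $s->{'n'} $s->{'v'}<br>\n";
      }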

    Read the article

  • Is this way of storing typed objects in memory good?

    - by Pindatjuh
    This is an "is this okay, or can it be done better" question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compactly). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is a lot bigger. This enables another idea, that "primitives are objects", in my language, which I found necessary. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 bytes (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical position when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignment. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down):

      1 Primitive
      1 Array
      1 ArrayType: Primitive
      0 Not-Array
      0 Not-Integer
      1 Boolean
      0 Not-Byte (thus bit)
      1 Integer Size: 1 Byte
      00100000 Array size
      11111111 11111111 11111111 11111111 Data

    Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 11111111 11111111 11111111 11111111. Is this okay, or can it be done better?

    Read the article

  • best way to parse plain text file with a nested information structure

    - by Beffa
    The text file has hundreds of these entries (format is MT940 bank statement) {1:F01AHHBCH110XXX0000000000}{2:I940X N2}{3:{108:XBS/091502}}{4: :20:XBS/091202/0001 :25:5887/507004-50 :28C:140/1 :60F:C0914CHF7789, :61:0912021202D36,80NTRFNONREF//0887-1202-29-941 04392579-0 LUTHY + xxx, ZUR :86:6034?60LUTHY + xxxx, ZUR vom 01.12.09 um 16:28 Karten-Nr. 2232 2579-0 :62F:C091202CHF52,2 :64:C091302CHF52,2 -} This should go into an Array of Hashes like [{"1"=>"F01AHHBCH110XXX0000000000"}, "2"=>"I940X N2", 3 => {108=>"XBS/091502"} etc. } ] I tried it with tree top, but it seemed not to be the right way, because it's more for something you want to do calculations on, and I just want the information. grammar Mt940 rule document part1:string spaces [:|/] spaces part2:document { def eval(env={}) return part1.eval, part2.eval end } / string / '{' spaces document spaces '}' spaces { def eval(env={}) return [document.eval] end } end end I also tried with a regular expression matches = str.scan(/\A[{]?([0-9]+)[:]?([^}]*)[}]?\Z/i) but it's difficult with recursion ... How can I solve this problem?
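    A possible starting point without a grammar, sketched below; the regex assumes at most one level of nested {...} blocks, which is all the MT940 header blocks here need:

      # Split "{1:...}{2:...}{3:{108:...}}{4: ... -}" into a hash, recursing into nested blocks.
      def parse_blocks(str)
        blocks = {}
        str.scan(/\{(\d+):((?:[^{}]|\{[^{}]*\})*)\}/m) do |id, body|
          blocks[id] = body.include?('{') ? parse_blocks(body) : body
        end
        blocks
      end

      # Applied to each statement in turn (one "{1:...}...{4:...-}" chunk per entry),
      # collecting the results gives the array of hashes:
      #   statements.map { |s| parse_blocks(s) }

    The ":20:"/":61:"/":86:" fields inside block 4 would still need a second pass that splits the body on lines starting with a colon-delimited tag.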

    Read the article
