Search Results

Search found 676 results on 28 pages for 'mappings'.

Page 21/28 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Nhibernate Fluent domain Object with Id(x => x.id).GeneratedBy.Assigned not saveable

    - by urpcor
    Hi there, I am using the corresponding domain classes with mappings for a legacy DB. The Ids of the entities are calculated by a stored procedure in the DB which returns the Id for the new row (it's legacy, I can't change this). Now I create the new entity, set the Id and call Save. But nothing happens: no exception, and even NH Profiler does not show a thing. It's as if the Save call does nothing. I suspect that NH thinks the record is already in the DB because it already has an Id. But I am using Id(x => x.id).GeneratedBy.Assigned() and intentionally the Session.Save(object) method. I am confused; I saw so many samples where it worked. Does anybody have any ideas about it? public class Appendix { public virtual int id { get; set; } public virtual AppendixHierarchy AppendixHierachy { get; set; } public virtual byte[] appendix { get; set; } } public class AppendixMap : ClassMap<Appendix> { public AppendixMap() { WithTable("appendix"); Id(x => x.id).GeneratedBy.Assigned(); References(x => x.AppendixHierachy).ColumnName("appendixHierarchyId"); Map(x => x.appendix); } }
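    A note worth adding (a sketch of typical NHibernate behaviour, not a confirmed diagnosis of the poster's setup): with an assigned id generator, Session.Save() only registers the entity with the session; the INSERT is issued at flush time, normally when the transaction commits. The variable names below are hypothetical:

        // Assigned ids are only written out on flush/commit.
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            var appendix = new Appendix { id = idFromStoredProc, appendix = data };
            session.Save(appendix);   // registers the entity, no SQL yet
            tx.Commit();              // flush runs here and the INSERT is sent
        }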

    Read the article

  • one-to-many with criteria question

    - by brnzn
    I want to apply restrictions on the list of items, so only items from a given date will be retrieved. Here are my mappings: <class name="MyClass" table="MyTable" mutable="false" > <cache usage="read-only"/> <id name="myId" column="myId" type="integer"/> <property name="myProp" type="string" column="prop"/> <list name="items" inverse="true" cascade="none"> <key column="myId"/> <list-index column="itemVersion"/> <one-to-many class="Item"/> </list> </class> <class name="Item" table="Items" mutable="false" > <cache usage="read-only"/> <id name="myId" column="myId" type="integer"/> <property name="itemVersion" type="string" column="version"/> <property name="startDate" type="date" column="startDate"/> </class> I tried this code: Criteria crit = session.createCriteria(MyClass.class); crit.add( Restrictions.eq("myId", new Integer(1))); crit = crit.createCriteria("items").add( Restrictions.le("startDate", new Date()) ); which results in the following queries: select ... from MyTable this_ inner join Items items1_ on this_.myId=items1_.myId where this_.myId=? and items1_.startDate<=? followed by select ... from Items items0_ where items0_.myId=? But what I need is something like: select ... from MyTable this_ where this_.myId=? followed by select ... from Items items0_ where items0_.myId=? and items0_.startDate<=? Any idea how I can apply a criteria on the list of items?

    Read the article

  • How can I map to a field that is joined in via one of three possible tables

    - by Mongus Pong
    I have this object : public class Ledger { public virtual Person Client { get; set; } // .... } The Ledger table joins to the Person table via one of three possible tables : Bill, Receipt or Payment. So we have the following tables : Ledger LedgerID PK Bill BillID PK, LedgerID, ClientID Receipt ReceiptID PK, LedgerID, ClientID Payment PaymentID PK, LedgerID, ClientID If it was just the one table, I could map this as : Join ( "Bill", x => { x.ManyToOne ( ledger => ledger.Client, mapping => mapping.Column ( "ClientID" ) ); x.Key ( l => l.Column ( "LedgerID" ) ); } ); This won't work for three tables. For a start the Join performs an inner join. Since there will only ever be one of Bill, Receipt or Payment - an inner join on these tables will always return zero rows. Then it would need to know to do a Coalesce on the ClientID of each of these tables to know the ClientID to grab the data from. Is there a way to do this? Or am I asking too much of the mappings here?
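    One direction worth sketching (an assumption-laden sketch, not a verified mapping: BillClient/ReceiptClient/PaymentClient are invented properties, and it assumes the Fluent NHibernate join API in use exposes Optional/KeyColumn/References): map each link table as an optional join onto Ledger and do the coalescing in the class.

        // Sketch: three optional (outer) joins, each feeding its own nullable reference;
        // Client simply returns whichever of the three is populated.
        public class Ledger
        {
            public virtual Person BillClient { get; set; }
            public virtual Person ReceiptClient { get; set; }
            public virtual Person PaymentClient { get; set; }
            public virtual Person Client
            {
                get { return BillClient ?? ReceiptClient ?? PaymentClient; }
            }
        }

        // Inside LedgerMap, one block per link table:
        Join("Bill", join =>
        {
            join.Optional();                      // outer join, so zero rows is fine
            join.KeyColumn("LedgerID");
            join.References(l => l.BillClient, "ClientID");
        });
        // ...and likewise for "Receipt" and "Payment".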

    Read the article

  • Robotlegs: Warning: Injector already has a rule for type

    - by MikeW
    I have a bunch of warning messages like this appearing when using Robotlegs/Signals. Every time this command class executes, which is every 2-3 seconds, this message displays: Warning: Injector already has a rule for type "mx.messaging.messages::IMessage", named "". If you have overwritten this mapping intentionally you can use "injector.unmap()" prior to your replacement mapping in order to avoid seeing this message. The command functions fine otherwise, but I think I'm doing something wrong anyhow. public class MessageReceivedCommand extends SignalCommand { [Inject] public var message:IMessage; ...etc.. do something with message.. } The application context doesn't map IMessage to this command, as I only see an option to mapSignalClass; besides, the payload is received fine. I wonder if anyone knows how I might either fix or suppress this message. I've tried calling injector.unmap(IMessage, "") as the warning suggests, but I receive an error - no mapping found for ::IMessage named "". Thanks. Edit: A bit more info about the error. Here is the signal that I dispatch to the command: public class GameMessageSignal extends Signal { public function GameMessageSignal() { super(IMessage); } } which is dispatched from an IPushDataService class: gameMessage.dispatch(message.message); and the implementation is wired up in the app context via injector.mapClass(IPushDataService, PushDataService); along with the signal: signalCommandMap.mapSignalClass(GameMessageSignal, MessageReceivedCommand); Edit #2: Probably good to point out also that I inject an instance of GameMessageSignal into IPushDataService: public class PushDataService extends BaseDataService implements IPushDataService { [Inject] public var gameMessage:GameMessageSignal; //then private function processMessage(message:MessageEvent):void { gameMessage.dispatch(message.message); } } Edit #3: The mappings I set up in the SignalContext: injector.mapSingleton(IPushDataService); injector.mapClass(IPushDataService, PushDataService);

    Read the article

  • How to map oracle timestamp to appropriate java type in hibernate?

    - by jschoen
    I am new to Hibernate and I am stumped. In my database I have tables with columns of type TIMESTAMP(6). I am using NetBeans 6.5.1, and when I generate the hibernate.reveng.xml, hbm.xml files, and POJO files it sets those columns to be of type Serializable. This is not what I expected, nor what I want them to be. I found a post on the Hibernate forums saying to place a type-mapping override in the hibernate.reveng.xml file. In NetBeans you are not able to generate the mappings from this file (it creates a new one every time), and it does not seem to have the ability to re-generate them from the file either (at least according to this it is slated to be available in version 7). So I am trying to figure out what to do. I am more inclined to believe I am doing something wrong since I am new to this, and it seems like it would be a common problem for others. So what am I doing wrong? If I am not doing anything wrong, how do I work around this? I am using NetBeans 6.5, Oracle 10g, and I believe Hibernate 3 (it came with my NetBeans). Edit: Meant to say I found this Stack Overflow question, but it is really a different problem.

    Read the article

  • Simple Fluent NHibernate Mapping Problem

    - by user500038
    I have the following tables I need to map: Person (PersonId, FullName), PersonAddress (PersonId, AddressId, IsDefault), and Address (AddressId, State). And the following classes: public class Person { public virtual int Id { get; set; } public virtual string FullName { get; set; } } public class PersonAddress { public virtual Person Person { get; set; } public virtual Address Address { get; set; } public virtual bool IsDefault { get; set; } } public class Address { public virtual int Id { get; set; } public virtual string State { get; set; } } And finally the mappings: public class PersonMap : ClassMap<Person> { public PersonMap() { Id(x => x.Id, "PersonId"); } } public class PersonAddressMap : ClassMap<PersonAddress> { public PersonAddressMap() { CompositeId().KeyProperty(x => x.Person, "PersonID") .KeyProperty(x => x.Address, "AddressID"); } } public class AddressMap: ClassMap<Address> { public AddressMap() { Id(x => x.Id, "AddressId"); } } Assume I cannot alter the tables. If I take the mapping class (PersonAddress) out of the equation, everything works fine. If I put it back in I get: Could not determine type for: Person, Person, Version=1.0, Culture=neutral, PublicKeyToken=null, for columns: NHibernate.Mapping.Column(PersonId) What am I missing here? I'm sure this must be simple.
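    For context, the "Could not determine type" error usually points at the composite key being mapped with KeyProperty, which expects simple value types rather than entities. A hedged sketch of the usual Fluent NHibernate shape for a composite key built from entity references (assuming KeyReference is available in the version in use):

        // Sketch only: KeyReference maps a key column to an entity reference,
        // where KeyProperty would try to treat Person/Address as plain value types.
        public class PersonAddressMap : ClassMap<PersonAddress>
        {
            public PersonAddressMap()
            {
                CompositeId()
                    .KeyReference(x => x.Person, "PersonID")
                    .KeyReference(x => x.Address, "AddressID");
                Map(x => x.IsDefault);
            }
        }

    Composite-id classes also generally need Equals/GetHashCode overrides so NHibernate can compare identifiers reliably.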

    Read the article

  • Hibernate many-to-one - bad usage?

    - by DaveA
    Just trying out Hibernate (with Annotations) and I'm having problems with my mappings. I have two entity classes, AudioCD and Artist. @Entity public class AudioCD implements CatalogItem { @Id @GeneratedValue(strategy = GenerationType.AUTO) private int id; private String title; @ManyToOne(cascade = { CascadeType.ALL }, optional = false) private Artist artist; .... } @Entity @Table(uniqueConstraints = { @UniqueConstraint(columnNames = { "name" }) }) public class Artist { @Id @GeneratedValue(strategy = GenerationType.AUTO) private int id; @Column(nullable = false) private String name; ..... } I get AudioCD objects from an external source. When I try to persist the AudioCD the Artist gets persisted as well, just like I want to happen. If I try persisting another, different CD whose Artist already exists, I get errors due to constraint violations. I want Hibernate to recognise that the Artist already exists and shouldn't be inserted again. Can this be done via annotations? Or do I have to manage the persistence of the AudioCD and Artist separately?

    Read the article

  • How to map keys in vim differently for different kinds of buffers

    - by Yogesh Arora
    The problem I am facing is that I have mapped some keys and mouse events for searching in Vim while editing a file. But those mappings impact the functionality of the quickfix buffer. I was wondering if it is possible to map keys depending on the buffer in which they are used. EDIT - I am adding more info for this question. Let us consider a scenario: I want to map <C-F4> to close a buffer/window. Now this behavior could depend on a number of things. If I am editing a buffer it should just close that buffer without changing the layout of the windows. I am using the bufkill plugin for this. It does not depend on the extension of the file but on the type of buffer. I saw in the Vim documentation that there are unlisted and listed buffers. So if it is a listed buffer it should close using bufkill commands. If it is not a listed buffer it should use the <c-w>c command, closing the buffer and changing the window layout. I am new at writing Vim functions/scripts; can someone help me get started on this?

    Read the article

  • Fluent NHibernate - Set reference key columns to null

    - by Matt
    Hi, I have a table of Appointments and a table of AppointmentOutcomes. On my Appointments table I have an OutcomeID field which has a foreign key to AppointmentOutcomes. My Fluent NHibernate mappings look as follows; Table("Appointments"); Not.LazyLoad(); Id(c => c.ID).GeneratedBy.Assigned(); Map(c => c.Subject); Map(c => c.StartTime); References(c => c.Outcome, "OutcomeID"); Table("AppointmentOutcomes"); Not.LazyLoad(); Id(c => c.ID).GeneratedBy.Assigned(); Map(c => c.Description); Using NHibernate, if I delete an AppointmentOutcome an exception is thrown because the foreign key is invalid. What I would like to happen is that deleting an AppointmentOutcome would automatically set the OutcomeID of any Appointments that reference the AppointmentOutcome to NULL. Is this possible using Fluent NHibernate?
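    NHibernate's cascade options (save-update, delete, all, all-delete-orphan) don't include an "on delete set null", so one common pattern - sketched here under the assumption that the OutcomeID column is nullable - is to clear the referencing appointments before deleting the outcome:

        // Sketch: null out the references, then delete, all in one transaction.
        using (var tx = session.BeginTransaction())
        {
            var appointments = session.CreateCriteria<Appointment>()
                .Add(Restrictions.Eq("Outcome", outcome))
                .List<Appointment>();

            foreach (var appointment in appointments)
                appointment.Outcome = null;       // requires a nullable OutcomeID column

            session.Delete(outcome);
            tx.Commit();
        }

    Alternatively, the same effect can be pushed down to the database by declaring the foreign key with ON DELETE SET NULL, outside the Fluent mapping itself.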

    Read the article

  • CSS file in a Spring WAR returns a 404

    - by Rachel G.
    I have a J2EE application that I am building with Spring and Maven. It has the usual project structure; the relevant part of the hierarchy is MyApplication/src/main/webapp, containing WEB-INF/layout/header.jsp and styles/main.css. I want to include that CSS file in my JSP. I have the following tag in place. <c:url var="styleSheetUrl" value="/styles/main.css" /> <link rel="stylesheet" href="${styleSheetUrl}"> When I deploy the application, the CSS file isn't being located. When I view the page source, the href is /MyApplication/styles/main.css. Looking inside the WAR, there is a /styles/main.css. However, I get a 404 when I try to access the CSS file directly in the browser. I discovered that the reason for the issue was the Dispatcher Servlet mapping. The mapping looks as follows. <servlet-mapping> <servlet-name>Spring MVC Dispatcher Servlet</servlet-name> <url-pattern>/</url-pattern> </servlet-mapping> I imagine the Dispatcher Servlet doesn't know how to handle the CSS request. What is the best way to handle this issue? I would rather not have to change all of my request mappings.

    Read the article

  • Best practice when removing entity regarding mappedBy collections?

    - by Daniel Bleisteiner
    I'm still kind of undecided which is the best practice to handle em.remove(entity) when the entity is in several collections mapped using mappedBy in JPA. Consider an entity like a Property that references three other entities: a Descriptor, a BusinessObject and a Level entity. The mapping is defined using @ManyToOne in the Property entity and using @OneToMany(mappedBy...) in the other three objects. That inverse mapping is defined because there are some situations where I need to access those collections. Whenever I remove a Property using em.remove(prop), this element is not automatically removed from managed entities of the other three types. If I don't care about that and the following page load (webapp) doesn't reload those entities, the Property is still found and some decisions might be taken that are no longer true. The inverse mappings may become quite large, so I don't want to use something like descriptor.getProperties().remove(prop) because it would force loading of all those properties that may have stayed lazily unloaded until then. So my currently preferred way is to refresh the entity if it is managed: if (em.contains(descriptor)) em.refresh(descriptor) - which unloads a possibly loaded collection and triggers a reload upon the next access. Is there another feasible way to handle all those mappedBy collections of already loaded entities?

    Read the article

  • In sync query calls, one query causing other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanations for it: I was involved in optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there should be some value mappings, which performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time does each part of the application take. It yields that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest distributed in other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting for the total run time to be around 35 minutes, as I have removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else), but each SELECT took longer this time! Everything was running sync, there were no async calls. And there was only one single thread of execution. SELECT and INSERT queries are very simple, and don't have anything special, and they are on different tables, but on the same DB. I tested with both the DB on the application machine, and on a remote network machine. I can't think of any explanation for this, as the Profiler (Application profiler, not SQL Profiler) reported the changes in the method call times, and by removing INSERT statements SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (there can't be cache / query optimization stuff, because the queries were run in sync, and in a single thread, and it was far from affecting the cache this much) I should note that the bottleneck of the speed was in SQL server, using most of the CPU time.

    Read the article

  • ASP.NET Routing not working on IIS 7.0

    - by Rick Strahl
    I ran into a nasty little problem today when deploying an application using ASP.NET 4.0 Routing to my live server. The application and its Routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed (Windows 2008 R1 and IIS 7.0) Routing would just not work. Every time I hit a routed url IIS would just throw up a 404 error: This is an IIS error, not an ASP.NET error so this doesn’t actually come from ASP.NET’s routing engine but from IIS’s handling of expressionless URLs. Note that it’s clearly falling through all the way to the StaticFile handler which is the last handler to fire in the typical IIS handler list. In other words IIS is trying to parse the extension less URL and not firing it into ASP.NET but failing. As I mentioned on my local machine this all worked fine and to make sure local and live setups match I re-copied my Web.config, double checked handler mappings in IIS and re-copied the actual application assemblies to the server. It all looked exactly matched. However no workey on the server with IIS 7.0!!! Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute flag on the modules key in web.config and set it to true: <system.webServer> <modules runAllManagedModulesForAllRequests="true"> <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" /> </modules> </system.webServer> And lo and behold, Routing started working on the live server and IIS 7.0! This seems really obvious now of course, but the really tricky thing about this is that on IIS 7.5 this key is not necessary. So on my Windows 7 machine ASP.NET Routing was working just fine without the key set. However on IIS 7.0 on my live server the same missing setting was not working. On IIS 7.0 this key must be present or Routing will not work. Oddly on IIS 7.5 it appears that you can’t even turn off the behavior – setting runtAllManagedModuleForAllRequests="false" had no effect at all and Routing continued to work just fine even with the flag set to false, which is NOT what I would have expected. Kind of disappointing too that Windows Server 2008 (R1) can’t be upgraded to IIS 7.5. It sure seems like that should have been possible since the OS server core changes in R2 are pretty minor. For the future I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like that with the release of IIS Express Microsoft has taken some steps to untie some of those tight OS links from IIS. Let’s hope that’s the case for the future – it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out. Moral of the story – never assume that your dev setup will work as is on the live setup. It took me forever to figure this out because I assumed that because my web.config on the local machine was fine and working and I copied all relevant web.config data to the server it can’t be the configuration settings. I was looking everywhere but in the .config file forever before getting desperate and remembering the flag when I accidentally checked the intellisense settings in the modules key. Never assume anything. The other moral is: Try to keep your dev machine and server OS’s in sync whenever possible. Maybe it’s time to upgrade to Windows Server 2008 R2 after all. More info on Extensionless URLs in IIS Want to find out more exactly on how extensionless Urls work on IIS 7? 
Then check out How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests, which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article – a great linked reference for this topic!

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems. Oracle is enhancing its data integration offering announcing the general availability of 12c release for the key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering Simplified and High-Performance Solutions for Cloud, Big Data Analytics, and Real-Time Replication. The new release delivers extreme performance, increase IT productivity, and simplify deployment, while helping IT organizations to keep pace with new data-oriented technology trends including cloud computing, big data analytics, real-time business intelligence. With the 12c release Oracle becomes the new leader in the data integration and replication technologies as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under 3 key themes: Future-Ready Solutions : Supporting Current and Emerging Initiatives Extreme Performance : Even higher performance than ever before Fast Time-to-Value : Higher IT Productivity and Simplified Solutions  With the new capabilities in Oracle Data Integrator 12c, customers can benefit from: Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger. Increased performance when executing data integration processes due to improved parallelism. Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c. Improved interoperability with Oracle Warehouse Builder which enables faster and easier migration to Oracle Data Integrator’s strategic data integration offering. Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications’ latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion. Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance. Only Oracle GoldenGate provides the best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from: Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes via a new Coordinated Delivery feature for non-Oracle databases. Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE v 15.7, MySQL NDB Clusters 7.2, and MySQL 5.6., as well as integration with Oracle Coherence. Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover integration. Enhanced security for credentials and encryption keys using Oracle Wallet. Real-time replication for databases hosted on public cloud environments supported by third-party clouds. 
Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations: Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training. Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments. Delivers real-time data for reporting, zero downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce . Oracle’s data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle’s fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine. Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new offering on data integration with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, Rittman Mead will join us in launching the new release. Resource Kits Meet Oracle Data Integration 12c  Discover what's new with Oracle Goldengate 12c  Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com Stay Connected Oracle Newsletters

    Read the article

  • Apache virtual host does not work properly

    - by Jori
    I have read a lot of information all over the Internet regarding this subject, and can not figure out what I'am doing wrong. I'm trying to host two websites under different names locally under Windows 7 with Apaches Virtual Hosting functionality. This is what I have done already: In the httpd.conf file I uncommented the following line, so that the virtual host configuration file will be included in the main configuration sequence. # Virtual hosts Include conf/extra/httpd-vhosts.conf This is how I edited my httpd-vhosts.conf: # # Virtual Hosts # # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.2/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # Use name-based virtual hosting. # NameVirtualHost *:80 # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. # #<VirtualHost *:80> # ServerAdmin [email protected] # DocumentRoot "C:/apache/docs/dummy-host.localhost" # ServerName dummy-host.localhost # ServerAlias www.dummy-host.localhost # ErrorLog "logs/dummy-host.localhost-error.log" # CustomLog "logs/dummy-host.localhost-access.log" common #</VirtualHost> # #<VirtualHost *:80> # ServerAdmin [email protected] # DocumentRoot "C:/apache/docs/dummy-host2.localhost" # ServerName dummy-host2.localhost # ErrorLog "logs/dummy-host2.localhost-error.log" # CustomLog "logs/dummy-host2.localhost-access.log" common #</VirtualHost> <VirtualHost *:80> ServerName arterieur DocumentRoot "J:/webcontent/www20" <Directory "J:/webcontent/www20"> Order allow,deny Allow from all </Directory> </VirtualHost> As you can see I commented the Virtual Host examples out and added my own one (I did one for this example). Also am I sure that J:\webcontent\www20 exists. At last I edited the Windows host file located in: C:\Windows\System32\drivers\etc\hosts, now it looks this: # Copyright (c) 1993-2009 Microsoft Corp. # # This is a sample HOSTS file used by Microsoft TCP/IP for Windows. # # This file contains the mappings of IP addresses to host names. Each # entry should be kept on an individual line. The IP address should # be placed in the first column followed by the corresponding host name. # The IP address and the host name should be separated by at least one # space. # # Additionally, comments (such as these) may be inserted on individual # lines or following the machine name denoted by a '#' symbol. # # For example: # # 102.54.94.97 rhino.acme.com # source server # 38.25.63.10 x.acme.com # x client host # localhost name resolution is handled within DNS itself. # 127.0.0.1 localhost # ::1 localhost 127.0.0.1 arterieur Then I restarted Apache with the Apache Service Monitor, and it gave me the following fatal error: The requested operation has failed!, I tried to look at the apache/logs/error.log file but I did not log anything, I guess it only logs the errors after startup. Does anyone knows what I'am doing wrong?

    Read the article

  • Silverlight Cream for February 21, 2011 -- #1049

    - by Dave Campbell
    In this Issue: Rob Eisenberg(-2-), Gill Cleeren, Colin Eberhardt, Alex van Beek, Ishai Hachlili, Ollie Riches, Kevin Dockx, WindowsPhoneGeek(-2-), Jesse Liberty(-2-), and John Papa. Above the Fold: Silverlight: "Silverlight 4: Creating useful base classes for your views and viewmodels with PRISM 4" Alex van Beek WP7: "Google Sky on Windows Phone 7" Colin Eberhardt Shoutouts: My friends at SilverlightShow have their top 5 for last week posted: SilverlightShow for Feb 14 - 20, 2011 From SilverlightCream.com: Rob Eisenberg MVVMs Us with Caliburn.Micro! Rob Eisenberg chats with Carl and Richard on .NET Rocks episode 638 about Caliburn.Micro which takes Convention-over-Configuration further, utilizing naming conventions to handle a large number of data binding, validation and other action-based characteristics in your app. Two Caliburn Releases in One Day! Rob Eisenberg also announced that release candidates for both Caliburn 2.0 and Caliburn.Micro 1.0 are now available. Check out the docs and get the bits. Getting ready for Microsoft Silverlight Exam 70-506 (Part 6) Gill Cleeren has Part 6 of his series on getting ready for the Silverlight Exam up at SilverlightShow.... this time out, Gill is discussing app startup, localization, and using resource dictionaries, just to name a few things. Google Sky on Windows Phone 7 Colin Eberhardt has a very cool WP7 app described where he's using Google Sky as the tile source for Bing Maps, and then has a list of 110 Messier Objects.. interesting astronomical objects that you can look at... all with source! Silverlight 4: Creating useful base classes for your views and viewmodels with PRISM 4 Alex van Beek has some Prism4/Unity MVVM goodness up with this discussion of a login module using View and ViewModel base classes. Windows Phone 7 and WCF REST – Authentication Solutions Ishai Hachlili sent me this link to his post about WCF REST web service and authentication for WP7, and he offers up 2 solutions... from the looks of this, I'm also putting his blog on my watch list WP7Contrib: Isolated Storage Cache Provider Ollie Riches has a complete explanation and code example of using the IsolatedStorageCacheProvider in their WP7Contrib library. Using a ChannelFactory in Silverlight, part two: binary cows & new-born calves Kevin Dockx follows-up his post on Channel Factories with this part 2, expanding the knowledge-base into usin parameters and custom binding with binary encoding, both from reader suggestions. All about UriMapping in WP7 WindowsPhoneGeek has a post up about URI mappings in WP7 ... what it is, how to enable it in code behind or XAML, then using it either with a hyperlink button or via the NavigationService class... all with code. Passing WP7 Memory Consumption requirements with the Coding4Fun MemoryCounter tool WindowsPhoneGeek's latest is a tutorial on the use of the Memory Counter control from the Coding4Fun toolkit and WP7 Memory consumption. Getting Started With Linq Jesse Liberty gets into LINQ in his Episode 33 of his WP7 'From Scratch' series... looks like a good LINQ starting point, and he's going to be doing a series on it. Linq with Objects In his second post on LINQ, Jesse Liberty is looking at creating a Linq query against a collection of objects... always good stuff, Jesse! Silverlight TV Silverlight TV 62: The Silverlight 5 Triad Unplugged John Papa is joined by Sam George, Larry Olson, and Vijay Devetha (the Silverlight Triad) on this Silverlight TV episode 62 to discuss how the team works together, and hey... they're hiring! 

    Read the article

  • Oracle Unified Method (OUM) Release 5.6

    - by user714714
    ORACLE® UNIFIED METHOD RELEASE 5.6 Oracle’s Full Lifecycle Methodfor Deploying Oracle-Based Business Solutions About | Release | Access | Previous Announcements About Oracle is evolving the Oracle® Unified Method (OUM) to achieve the vision of supporting the entire Enterprise IT Lifecycle, including support for the successful implementation of every Oracle product. OUM replaces Legacy Methods, such as AIM Advantage, AIM for Business Flows, EMM Advantage, PeopleSoft's Compass, and Siebel's Results Roadmap. OUM provides an implementation approach that is rapid, broadly adaptive, and business-focused. OUM includes a comprehensive project and program management framework and materials to support Oracle's growing focus on enterprise-level IT strategy, architecture, and governance. Release OUM release 5.6 provides support for Application Implementation, Cloud Application Implementation, and Software Upgrade projects as well as the complete range of technology projects including Business Intelligence (BI) and Enterprise Performance Management (EPM), Enterprise Security, WebCenter, Service-Oriented Architecture (SOA), Application Integration Architecture (AIA), Business Process Management (BPM), Enterprise Integration, and Custom Software. Detailed techniques and tool guidance are provided, including a supplemental guide related to Oracle Tutor and UPK. This release features: Business Process Management (BPM) Project Engineering Supplemental Guide Cloud Roadmap View and Supplemental Guide Enterprise Security View and Supplemental Guide Service-Oriented Architecture (SOA) Governance Implementation Supplemental Guide "Tailoring OUM for Your Project" White Paper OUM Microsoft Project Workplan Template and User's Guide Mappings: OUM to J.D. Edwards OneMethodology, OUM Roles to Task Techniques: Determining Number of Iterations, Managing an OUM Project using Scrum Templates: Scrum Workplan (WM.010), Siebel CRM Enhanced / Updated: Manage Focus Area reorganized by Activities for all Views Oracle Architecture Development Process (OADP) View updated for OADP v3.0 Oracle Support Services Supplemental Guide expanded to include guidance related to IT Change Management Oracle User Productivity Kit Professional (UPK Pro) and Tutor Supplemental Guide expanded guidance for UPK Pro Service-Oriented Architecture (SOA) Application Integration Architecture (AIA) Supplemental Guide updated for SOA Tactical Project Delivery View Service-Oriented Architecture (SOA) Tactical Project Delivery View expanded to include additional tasks Siebel CRM Supplemental Guide expanded task guidance and added select Siebel-specific OUM templates WebCenter View and Supplemental Guide updated for WebCenter Portal and Content Management For a comprehensive list of features and enhancements, refer to the "What's New" page of the Method Pack. Upcoming releases will provide expanded support for Oracle's Enterprise Application suites including product-suite specific materials and guidance for tailoring OUM to support various engagement types. Access Oracle Customers Oracle customers may obtain copies of the method for their internal use – including guidelines, templates, and tailored work breakdown structure – by contracting with Oracle for a consulting engagement of two weeks or longer and meeting some additional minimum criteria. Customers, who have a signed consulting contract with Oracle and meet the engagement qualification criteria, are permitted to download the current release of OUM for their perpetual use. 
They may also obtain subsequent releases published during a renewable, three-year access period. Training courses are also available to these customers. Contact your local Oracle Sales Representative about enrolling in the OUM Customer Program. Oracle PartnerNetwork (OPN) Diamond, Platinum, and Gold Partners OPN Diamond, Platinum, and Gold Partners are able to access the OUM method pack, training courses, and collateral from the OPN Portal at no additional cost: Go to the OPN Portal at partner.oracle.com. Select the "Partners (Login Required)" tab. Login. Select the "Engage with Oracle" tab. From the Engage with Oracle page, locate the "Applications" heading. From the Applications heading, locate and select the "Oracle Unified Method" link. From the Oracle Unified Method Knowledge Zone, select the "Implement" tab. From the Implement tab, select the "Tools and Resources" link. Locate and select the "Oracle Unified Method (OUM)" link. Previous Announcements Oracle Unified Method (OUM) Release 5.6 Oracle Unified Method (OUM) Release 5.5 Oracle Unified Method (OUM) Release 5.4 Oracle EMM Advantage Retired Retirement of Oracle EMM Advantage Planned for December 01, 2011

    Read the article

  • virtual host not working in windows7 xampp

    - by K.B Panamaldeniya-littletipz
    hi i am using windows7 and xampp , i want to create a virtual host . so i added 127.0.0.1 myawesomeproject to my C:\Windows\System32\drivers\etc\hosts like this # Copyright (c) 1993-2009 Microsoft Corp. # # This is a sample HOSTS file used by Microsoft TCP/IP for Windows. # # This file contains the mappings of IP addresses to host names. Each # entry should be kept on an individual line. The IP address should # be placed in the first column followed by the corresponding host name. # The IP address and the host name should be separated by at least one # space. # # Additionally, comments (such as these) may be inserted on individual # lines or following the machine name denoted by a '#' symbol. # # For example: # # 102.54.94.97 rhino.acme.com # source server # 38.25.63.10 x.acme.com # x client host # localhost name resolution is handled within DNS itself. 127.0.0.1 localhost 127.0.0.1 myawesomeproject ::1 localhost and i added some lines to C:\xampp\apache\conf\extra\httpd-vhosts.conf like this # # Virtual Hosts # # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.2/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # Use name-based virtual hosting. # NameVirtualHost *:80 # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. # ##<VirtualHost *:80> ##ServerAdmin [email protected] ##DocumentRoot "C:/xampp/htdocs/dummy-host.localhost" ##ServerName dummy-host.localhost ##ServerAlias www.dummy-host.localhost ##ErrorLog "logs/dummy-host.localhost-error.log" ##CustomLog "logs/dummy-host.localhost-access.log" combined ##</VirtualHost> ##<VirtualHost *:80> ##ServerAdmin [email protected] ##DocumentRoot "C:/xampp/htdocs/dummy-host2.localhost" ##ServerName dummy-host2.localhost ##ServerAlias www.dummy-host2.localhost ##ErrorLog "logs/dummy-host2.localhost-error.log" ##CustomLog "logs/dummy-host2.localhost-access.log" combined ##</VirtualHost> <VirtualHost *> DocumentRoot "C:\xampp\htdocs" ServerName localhost </VirtualHost> <VirtualHost *> <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot c:\myawesomeproject ServerName localhost <Directory "c:\myawesomeproject"> Order allow,deny Allow from all </Directory> </VirtualHost> i created a folder called myawesomeproject in my c drive . when i type http://myawesomeproject it is rederecting to http://myawesomeproject/xampp i added another folder 'test' inside myawesomeproject . so the path to 'test' is C:/myawesomeproject/test . the problem is when i type http://myawesomeproject/test it gives an error. it says Object not found! The requested URL was not found on this server. If you entered the URL manually please check your spelling and try again. If you think this is a server error, please contact the webmaster. Error 404 myawesomeproject 8/22/2011 4:30:29 PM Apache/2.2.17 (Win32) mod_ssl/2.2.17 OpenSSL/0.9.8o PHP/5.3.4 mod_perl/2.0.4 Perl/v5.10.1 why is this . how can i create a virtual host........................ :(

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that, changes often get reverted and then re-added in the space of a few dozen to few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often that not, most of the file will remain consistent over a long period, only marred by one or two errant sections, but inside the period, which section is inconsistent, is inconsistent. For example, say a file has four sections with three data fields, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version, and one for the normal version, a section_two_variants might have three and five field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.

    Read the article

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question on how to skip or bypass a trailer record or a badly formatted/empty row in a SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out the reason why and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I indeed found that it seems even thought there is a number of posts and articles on the topic none of them are employing the simplest and the most efficient technique. When I say efficient I mean the shortest time to solution for the fellow developers. OK, enough talk. Let’s face the problem: Typically a flat file (e.g. a comma delimited/CSV) needs to be processed (loaded into a database in most cases really). Oftentimes, such an input file is produced by some sort of an out of control, 3-rd party solution and would come in with some garbage characters and/or even malformed/miss-formatted rows. One such example could be this imaginary file: As you can see several rows have no data and there is an occasional garbage character (1, in this example on row #7). Our task is to produce a clean file that will only capture the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source. Let’s outline our course of action to start off: Will use SSIS 2005 to create a DFT; The DFT will use a Flat File Source to our input [bad] flat file; We will use a Conditional Split to process the bad input file; and finally Dump the resulting data to a new [clean] file. Well, only four steps, let’s see if it is too much of work. 1: Start the BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT): 2, and 3: I had added the data viewer to just see what I am getting, alas, surprisingly the data issues were not seen it:   What really is the key in the approach it is to properly set the Conditional Split Transformation. Visually it is: and specifically its SSIS Expression LEN([After CS Column 0]) > 1 The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I re-named the Output Name “No Empty Rows”, but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part will be crucial – consuming the output of our Conditional Split. Last step - #4: Add a Flat File Destination or any other one you need. Click on the Conditional Split and choose the green arrow to drop onto the target. When you do so make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. At this point your package must look like: As the last step will run our package to examine the produced output file. F5: and… it looks great!

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to properly interact with the background art, I wrote a program to convert from depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it to some HD video codec such as H.264. The problem is that in order to maintain our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Due to the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the backend. I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion video facilities in the engine yet but will ultimately need that anyway for cinematics. I was planning on using some off-the-shelf motion video solution we can plug into our engine, and haven't chosen one yet. This new added requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and also will need to be playing two streams simultaneously in lockstep, on top of in some cases audio on one of them (for the cinematics). I'm also wondering if it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or if it's better to just do it in CPU and separately load the resulting depth values into a locked tex. The conversion unfortunately does involve branching logic per-pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup but the table would be 32MB. Programming is second-nature to me but I'm not that experienced with pix shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and what off-the-shelf video systems out there would best get along with this usage case.
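    As a point of reference for the colour-space idea, the naive mapping the poster alludes to is essentially splitting the 16-bit depth across two 8-bit channels. A sketch (in C# purely for illustration; the engine itself is C++):

        // Naive 16-bit depth <-> two 8-bit channels. Lossless on its own, but the low
        // byte wraps every 256 depth steps, so it survives chroma subsampling and
        // quantisation poorly - which is why the cited paper designs a smoother
        // mapping into YCbCr instead.
        static void Pack(ushort depth, out byte high, out byte low)
        {
            high = (byte)(depth >> 8);
            low = (byte)(depth & 0xFF);
        }

        static ushort Unpack(byte high, byte low)
        {
            return (ushort)((high << 8) | low);
        }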

    Read the article

  • Moving StarterSTS to the (Azure) Cloud

    - by Your DisplayName here!
    Quite some people asked me about an Azure version of StarterSTS. While I kinda knew what I had to do to make the move, I couldn’t find the time. Until recently. This blog post briefly documents the necessary changes and design decisions for the next version of StarterSTS which will work both on-premise and on Azure. Provider Fortunately StarterSTS is already based on the idea of “providers”. Authentication, roles and claims generation is based on the standard ASP.NET provider infrastructure. This makes the migration to different data stores less painful. In my case I simply moved the ASP.NET provider database to SQL Azure and still use the standard SQL Server based membership, roles and profile provider. In addition StarterSTS has its own providers to abstract resource access for certificates, relying party registration, client certificate registration and delegation. So I only had to provide new implementations. Signing and SSL keys now go in the Azure certificate store and user mappings (client certificates and delegation settings) have been moved to Azure table storage. The one thing I didn’t anticipate when I originally wrote StarterSTS was the need to also encapsulate configuration. Currently configuration is “locked” to the standard .NET configuration system. The new version will have a pluggable SettingsProvider with versions for .NET configuration as well as Azure service configuration. If you want to externalize these settings into e.g. a database, it is now just a matter of supplying a corresponding provider. Moving between the on-premise and Azure version will be just a matter of using different providers. URL Handling Another thing that’s substantially different on Azure (and load balanced scenarios in general) is the handling of URLs. In farm scenarios, the standard APIs like ASP.NET’s Request.Url return the current (internal) machine name, but you typically need the address of the external facing load balancer. There’s a hotfix for WCF 3.5 (included in v4) that fixes this for WCF metadata. This was accomplished by using the HTTP Host header to generate URLs instead of the local machine name. I now use the same approach for generating WS-Federation metadata as well as information card files. New Features I introduced a cache provider. Since we now have slightly more expensive lookups (e.g. relying party data from table storage), it makes sense to cache certain data in the front end. The default implementation uses the ASP.NET web cache and can be easily extended to use products like memcached or AppFabric Caching. Starting with the relying party provider, I now also provide a read/write interface. This allows building management interfaces on top of this provider. I also include a (very) simple web page that allows working with the relying party provider data. I guess I will use the same approach for other providers in the future as well. I am also doing some work on the tracing and health monitoring area. Especially important for the Azure version. Stay tuned.
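    To make the pluggable-settings idea concrete, a hypothetical provider contract might look roughly like this (the interface and class names are invented for illustration; the post does not show the actual StarterSTS API):

        // Hypothetical sketch of a settings provider abstraction with two backends.
        public interface ISettingsProvider
        {
            string GetSetting(string key);
        }

        public class ConfigFileSettingsProvider : ISettingsProvider
        {
            public string GetSetting(string key)
            {
                // Standard .NET configuration backend.
                return System.Configuration.ConfigurationManager.AppSettings[key];
            }
        }

        public class AzureSettingsProvider : ISettingsProvider
        {
            public string GetSetting(string key)
            {
                // Azure service configuration backend (classic RoleEnvironment API).
                return Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment
                    .GetConfigurationSettingValue(key);
            }
        }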

    Read the article

  • passwd ldap request to ActiveDirectory fails on half of 2500 users

    - by groovehunter
    We just setup ActiveDirectory in my company and imported all linux users and groups. On the linux client: (configured to ask ldap in nsswitch.conf): If i do a common ldapsearch to the AD ldap server i get the complete number of about 2580 users. But if i do this it only gets a part of all users, 1221 in number: getent passwd | wc -l Running it with strace shows kind of attempt to reconnect My ideas were: Does the linux authentication procedure run ldapsearch with a parameter incompatible to AD ldap ? Or probably it is a encoding issue. The windows user are entered in AD with all kind of characters. Maybe someone could shed light on this and give a hint how to debug that further!? Here's our ldap.conf host audc01.mycompany.de audc03.mycompany.de base ou=location,dc=mycompany,dc=de ldap_version 3 binddn cn=manager,ou=location,dc=mycompany,dc=de bindpw Password timelimit 120 idle_timelimit 3600 nss_base_passwd cn=users,cn=import,ou=location,dc=mycompany,dc=de?sub nss_base_group ou=location,dc=mycompany,dc=de?sub # RFC 2307 (AD) mappings nss_map_objectclass posixAccount User # nss_map_objectclass shadowAccount User nss_map_objectclass posixGroup Group nss_map_attribute uid sAMAccountName nss_map_attribute cn sAMAccountName # Display Name nss_map_attribute gecos cn ## nss_map_attribute homeDirectory unixHomeDirectory nss_map_attribute loginShell msSFU30LoginShell # PAM attributes pam_login_attribute sAMAccountName # Location based login pam_groupdn CN=Location-AU-Login,OU=au,OU=Location,DC=mycompany,DC=de pam_member_attribute msSFU30PosixMember ## pam_lookup_policy yes pam_filter objectclass=User nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,couchdb,daemon,games,gdm,gnats,haldaemon,hplip,irc,kernoops,libuuid,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,statd,sync,sys,syslog,usbmux,uucp,www-data and here the stacktrace from strace getent passwd poll([{fd=4, events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 1, 120000) = 1 ([{fd=4, revents=POLLIN}]) read(4, "0\204\0\0\0A\2\1", 8) = 8 read(4, "\4e\204\0\0\0\7\n\1\0\4\0\4\0\240\204\0\0\0+0\204\0\0\0%\4\0261.2."..., 63) = 63 stat64("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=1151, ...}) = 0 geteuid32() = 12560 getsockname(4, {sa_family=AF_INET, sin_port=htons(60334), sin_addr=inet_addr("10.1.35.51")}, [16]) = 0 getpeername(4, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("10.1.5.81")}, [16]) = 0 time(NULL) = 1297684722 rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0 munmap(0xb7617000, 1721) = 0 close(3) = 0 rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0 rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0 write(4, "0\5\2\1\5B\0", 7) = 7 shutdown(4, 2 /* send and receive */) = 0 close(4) = 0 shutdown(-1, 2 /* send and receive */) = -1 EBADF (Bad file descriptor) close(-1) = -1 EBADF (Bad file descriptor) exit_group(0) = ?

    Read the article

  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a clients classic asp webapp to a new IIS7 based server. The site contains some .js files which have javascript but also classic asp in <% % tags which contains a bunch of conditional statements designed to spit out pieces of javascript based on session state variables. Here's a brief example of what the file could be like.... var arrHOFFSET = -1; var arrLeft ="<"; var arrRight = ">"; <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %> addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","",""); <% Else %> <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %> <% Else %> addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","",""); <% End If %> <% End If %> defineSubmenuProperties(135,"center","center",-3,0,"","","","","","",""); Currently this file (named custom.js for example) will start throwing js errors, because the server doesnt seem to recognize the asp code in it and therefore does not parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and run through parsing it. However I am not sure how to go about doing this. Here is what I've tried so far... Under the Server node in IIS under HANDLER MAPPINGS I created a new Script Map with the following settings. Request Path: *.js Executable: C:\Windows\System32\inetsrv\asp.dll Name: ASPClassicInJSFiles Mapping: Invoke Handler only if request is mapped to : File Verbs: All verbs Access: Script I also created a similar handler under the site node itself. Under MIME Types .js is defined as application/x-javascript None of these work. If I simply rename the file to have .asp extension then things work, however this app is poorly coded and has literally 100's of files with the .js files included in them under various names and locations, so rename, search and replace is the last option I have.

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >