Search Results

Search found 1831 results on 74 pages for 'metadata'.


  • Silverlight 4 WCF RIA Services and MVVM is not as simple

    - by Thomas Jaskula
    [Disclaimer: I'm an ASP.NET MVC developer.] Hi, I'm looking for some best practices for implementing the MVVM pattern with WCF RIA Services in Silverlight 4. I'm not looking to use MEF or IoC for locating my ViewModels, and I don't want to use other frameworks like Prism or the MVVM Light Toolkit. What I would like to know is how to apply the MVVM pattern with plain Silverlight 4 and WCF RIA.

    I found many examples on the Internet showing how wonderful it is to drag and drop a datasource onto the view and be done (it reminds me of my first VB6 projects). I tried to implement MVVM with WCF RIA and it's not straightforward at all. If I understand correctly, the ViewModel should contain all the logic so that it can be unit tested in isolation, but when it comes to combining it with WCF RIA it's another story. I have the following questions:

    - Can I use a generated metadata class as the model? It would be easier than writing everything from scratch.
    - As far as I can see, the only ways to get data are through the DomainContext or through direct binding in the view (local resource). I don't want direct binding in the view - it's not testable at all. On the other hand, I can't use the DomainContext as-is: it doesn't expose any single entity! All I have is the EntitySet, which I can bind to a datagrid. How do I bind a single entity to a DataForm from the ViewModel?
    - How do I update the model back to the database?
    - How do I navigate from one entity to a collection of its items? For example, if I have a Company entity, I would like to show a DataForm to update the entity's information and a datagrid to show the company's addresses. When saving the form I would like to save the Company information and, for example, information about which address was selected as active.

    Please help me understand how to do this well. Or maybe I should drop WCF RIA and do it with plain WCF from scratch? What do you think?
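
    One way to keep the ViewModel testable is to expose a single "current" entity as a plain property with change notification, and hide the DomainContext behind a small interface that a unit test can fake. A loose sketch of that shape only - ICompanyService, Company and the method names are hypothetical stand-ins, not the generated RIA Services API:

      using System;
      using System.Collections.Generic;
      using System.Collections.ObjectModel;
      using System.ComponentModel;

      // Stand-in for the generated RIA Services entity.
      public class Company
      {
          public string Name { get; set; }
      }

      // Assumed abstraction over the generated DomainContext (its EntitySet and
      // SubmitChanges), so the ViewModel can be unit tested with a fake.
      public interface ICompanyService
      {
          void LoadCompanies(Action<IEnumerable<Company>> completed);
          void SubmitChanges(Action completed);
      }

      public class CompanyViewModel : INotifyPropertyChanged
      {
          private readonly ICompanyService _service;
          private ObservableCollection<Company> _companies;
          private Company _currentCompany;

          public CompanyViewModel(ICompanyService service)
          {
              _service = service;
              _service.LoadCompanies(items =>
              {
                  Companies = new ObservableCollection<Company>(items);
              });
          }

          // Bind the DataGrid's ItemsSource to this.
          public ObservableCollection<Company> Companies
          {
              get { return _companies; }
              private set { _companies = value; OnPropertyChanged("Companies"); }
          }

          // Bind the DataForm's CurrentItem (and the grid's SelectedItem) to this.
          public Company CurrentCompany
          {
              get { return _currentCompany; }
              set { _currentCompany = value; OnPropertyChanged("CurrentCompany"); }
          }

          public void Save()
          {
              _service.SubmitChanges(() => { /* report success to the view */ });
          }

          public event PropertyChangedEventHandler PropertyChanged;

          private void OnPropertyChanged(string name)
          {
              var handler = PropertyChanged;
              if (handler != null) handler(this, new PropertyChangedEventArgs(name));
          }
      }

    In the real application the ICompanyService implementation would wrap the DomainContext, which keeps the RIA-specific loading and submit plumbing out of the ViewModel tests.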


  • FILESTREAM/FILETABLE Clarifications for Implementation

    - by user1209734
    Recently our team was looking at FILESTREAM to expand the capabilities of our proprietary application. The main purpose of this app is managing the various PDFs, images and documents for all of the parts we manufacture. Our ASP application uses a few third-party tools to allow viewing of these files. We currently have 980 GB of data on the file server and around 200 GB of binary data in SQL Server that we would like to extract, since it is not performing well - hence FILESTREAM seems to be a good compromise for the two major data storage/access issues. A few things are not exactly clear to us:

    - Can FILESTREAM store its data on a drive that is not locally attached? We already have a file server with a RAID 10 array (1.5 TB drives) that stores all of the documents right now. Would we have to move these drives to the SQL Server for FILESTREAM? That would be a tough bullet to bite, since that server also doubles as the application server (two VMs on one physical server).
    - FILETABLE stores the common metadata about the files, but where is the full-text part of it stored to allow searching of files like doc/docx? Is this separate? Are you able to freely add criteria to search by? If so, any links to clarify would be appreciated.
    - Can a FILETABLE be referenced in another table with a foreign key?

    Thank you in advance.

    EDIT: For those having these questions, this web video covered everything and more in terms of explaining FILESTREAM from 2008 to 2012 and the caveats to consider (I would seriously rep him if I could): http://channel9.msdn.com/Events/TechDays/Techdays-2012-the-Netherlands/2270. In conclusion we will not be using FILESTREAM, as it would be way too huge of an upsurge to accommodate for the investment.

    EDIT 2:
    - Update to #1 - After carefully assessing FILETABLE in addition to FILESTREAM we got a winning combination. We did have to move the files over to the new server (which wasn't too painful since they were on the same VM). It honestly took more time to write an extraction tool to dump the binary data within SQL to the file system.
    - Update to #2 - This was separate, but again Bob had an excellent webinar explaining this: http://channel9.msdn.com/Events/TechEd/Europe/2012/DBI411
    - Update to #3 - Using TFT inheritance we recycled the Docs table we had (minus the huge binary blobs), which required very few changes in our legacy apps. This was a huge upshot for the developer team.


  • Asp.Net MVC Data Annotations. How to get client side validation on 2 properties being equal

    - by Mark
    How do you get client-side validation on two properties, such as the classic password / confirm-password scenario? I'm using a metadata class based on the EF mapping to my DB table; here's the code. The commented-out attributes on my class will get me server-side validation but not client-side.

      [MetadataType(typeof(MemberMD))]
      public partial class Member
      {
          //[CustomValidation(typeof(MemberMD), "Verify", ErrorMessage = "The password and confirmation password did not match.")]
          //[PropertiesMustMatch("Password", "ConfirmPassword", ErrorMessage = "The password and confirmation password did not match.")]
          public class MemberMD
          {
              [Required(ErrorMessage = "Name is required.")]
              [StringLength(50, ErrorMessage = "No more than 50 characters")]
              public object Name { get; set; }

              [Required(ErrorMessage = "Email is required.")]
              [StringLength(50, ErrorMessage = "No more than 50 characters.")]
              [RegularExpression(".+\\@.+\\..+", ErrorMessage = "Valid email required e.g. [email protected]")]
              public object Email { get; set; }

              [Required(ErrorMessage = "Password is required.")]
              [StringLength(30, ErrorMessage = "No more than 30 characters.")]
              [RegularExpression("[\\S]{6,}", ErrorMessage = "Must be at least 6 characters.")]
              public object Password { get; set; }

              [Required]
              public object ConfirmPassword { get; set; }

              [Range(0, 150), Required]
              public object Age { get; set; }

              [Required(ErrorMessage = "Postcode is required.")]
              [RegularExpression(@"^[a-zA-Z0-9 ]{1,10}$", ErrorMessage = "Postcode must be alphanumeric and no more than 10 characters in length")]
              public object Postcode { get; set; }

              [DisplayName("Security Question")]
              [Required]
              public object SecurityQuestion { get; set; }

              [DisplayName("Security Answer")]
              [Required]
              [StringLength(50, ErrorMessage = "No more than 50 characters.")]
              public object SecurityAnswer { get; set; }

              public static ValidationResult Verify(MemberMD t)
              {
                  if (t.Password == t.ConfirmPassword)
                      return ValidationResult.Success;
                  else
                      return new ValidationResult("");
              }
          }
      }

    Any help would be greatly appreciated - as I have only been doing this for 5 months, please try not to blow my mind.
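
    If you are on MVC 3 or later, the built-in [Compare] attribute covers this exact case; the same idea can also be written as a custom cross-property attribute that publishes a client validation rule via IClientValidatable. A rough sketch of that idea, assuming MVC 3 and the jQuery "equalto" adapter are available - treat the attribute and its names as illustrative, not a drop-in:

      using System;
      using System.Collections.Generic;
      using System.ComponentModel.DataAnnotations;
      using System.Web.Mvc;

      // Validates that the decorated property equals another property on the same
      // model, and emits a client rule so jQuery unobtrusive validation enforces it too.
      [AttributeUsage(AttributeTargets.Property)]
      public class MatchesAttribute : ValidationAttribute, IClientValidatable
      {
          private readonly string _otherProperty;

          public MatchesAttribute(string otherProperty)
          {
              _otherProperty = otherProperty;
          }

          protected override ValidationResult IsValid(object value, ValidationContext validationContext)
          {
              var otherInfo = validationContext.ObjectType.GetProperty(_otherProperty);
              var otherValue = otherInfo == null ? null : otherInfo.GetValue(validationContext.ObjectInstance, null);
              return Equals(value, otherValue)
                  ? ValidationResult.Success
                  : new ValidationResult(ErrorMessage ?? "Values do not match.");
          }

          public IEnumerable<ModelClientValidationRule> GetClientValidationRules(
              ModelMetadata metadata, ControllerContext context)
          {
              var rule = new ModelClientValidationRule
              {
                  ErrorMessage = ErrorMessage ?? "Values do not match.",
                  ValidationType = "equalto"   // assumes the matching jQuery adapter is registered
              };
              rule.ValidationParameters["other"] = "*." + _otherProperty;
              yield return rule;
          }
      }

      // Usage on the metadata class:
      //   [Matches("Password", ErrorMessage = "The password and confirmation password did not match.")]
      //   public object ConfirmPassword { get; set; }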


  • Why is Html.DropDownList() producing <select name="original_name.name"> instead of just <select name

    - by DanM
    I have an editor template whose job is to take a SelectList as its model and build a select element in HTML using the Html.DropDownList() helper extension. I'm trying to assign the name attribute for the select based on a ModelMetadata property. (Reason: on post-back, I need to bind to an object that has a different property name for this item than the ViewModel used to populate the form.) The problem I'm running into is that DropDownList() is appending the name I'm providing instead of replacing it, so I end up with names like categories.category instead of category. Here is some code for you to look at...

    SelectList.ascx

      <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<System.Web.Mvc.SelectList>" %>
      <%= Html.DropDownList(
          (string)ViewData.ModelMetadata.AdditionalValues["PropertyName"],
          Model) %>

    Resulting HTML

      <select id="SkillLevels_SkillLevel" name="SkillLevels.SkillLevel">
        <option value="1">High</option>
        <option value="2">Med</option>
        <option selected="selected" value="3">Low</option>
      </select>

    Expected HTML

      <select id="SkillLevels_SkillLevel" name="SkillLevel">
        <option value="1">High</option>
        <option value="2">Med</option>
        <option selected="selected" value="3">Low</option>
      </select>

    Also tried

      <%= Html.Encode((string)ViewData.ModelMetadata.AdditionalValues["PropertyName"])%>

    ...which resulted in "SkillLevel" (not "SkillLevels.SkillLevel"), proving that the data stored in metadata is correct. And

      <%= Html.DropDownList(
          (string)ViewData.ModelMetadata.AdditionalValues["PropertyName"],
          Model,
          new { name = (string)ViewData.ModelMetadata.AdditionalValues["PropertyName"] }) %>

    ...which still resulted in <select name="SkillLevels.SkillLevel">.

    Questions

    What's going on here? Why does it append the name instead of just using it? Can you suggest a good workaround?

    Update: I ended up writing a helper extension that literally does a find/replace on the HTML text:

      public static MvcHtmlString BindableDropDownListForModel(this HtmlHelper helper)
      {
          var propertyName = (string)helper.ViewData.ModelMetadata.AdditionalValues["PropertyName"];
          var compositeName = helper.ViewData.ModelMetadata.PropertyName + "." + propertyName;
          var rawHtml = helper.DropDownList(propertyName, (SelectList)helper.ViewData.Model);
          var bindableHtml = rawHtml.ToString().Replace(compositeName, propertyName);
          return MvcHtmlString.Create(bindableHtml);
      }
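
    Another possible workaround, rather than string-replacing the helper's output, is to build the <select> directly with TagBuilder so the name attribute is exactly what gets passed in and the template prefix never gets applied. A quick sketch of that approach (method and parameter names are just illustrative):

      using System.Text;
      using System.Web.Mvc;

      public static class BindableSelectExtensions
      {
          // Renders a <select> whose name/id are taken verbatim, bypassing the
          // template-prefix logic that DropDownList applies to the supplied name.
          public static MvcHtmlString RawDropDownList(this HtmlHelper helper, string name, SelectList items)
          {
              var select = new TagBuilder("select");
              select.MergeAttribute("name", name);
              select.GenerateId(name);

              var options = new StringBuilder();
              foreach (SelectListItem item in items)
              {
                  var option = new TagBuilder("option");
                  option.MergeAttribute("value", item.Value);
                  if (item.Selected) option.MergeAttribute("selected", "selected");
                  option.SetInnerText(item.Text);
                  options.Append(option.ToString(TagRenderMode.Normal));
              }

              select.InnerHtml = options.ToString();
              return MvcHtmlString.Create(select.ToString(TagRenderMode.Normal));
          }
      }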


  • Eclipse Dynamic Web Project fails to load under WTP

    - by Cue
    When trying to run an Eclipse Dynamic Web Project under a Tomcat setup using WTP, it fails with the attached stacktrace.

    Checklist
    - At the project properties, under "Java EE Module Dependencies", I have checked the "Maven Dependencies".
    - At the WTP deploy directory, under lib, indeed all dependencies are present (esp. struts-taglib-1.3.10.jar).
    - On the other hand, if I package with Maven and copy the war file under the webapps directory, everything works as normal.

    Specification
    - Eclipse JEE Galileo SR2 (with WTP 3.1.1)
    - Tomcat 6.0.26
    - Java(TM) SE Runtime Environment (build 1.6.0_17-b04-248-10M3025)
    - Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01-101, mixed mode)

    Stacktrace

      org.apache.jasper.JasperException: Unable to read TLD "META-INF/tld/struts-html-el.tld" from JAR file
      "file:/Users/cue/Development/workspace/eclipse/.metadata/.plugins/org.eclipse.wst.server.core/tmp2/wtpwebapps/ticketing/WEB-INF/lib/struts-el-1.3.10.jar":
      org.apache.jasper.JasperException: Failed to load or instantiate TagExtraInfo class: org.apache.struts.taglib.html.MessagesTei
        at org.apache.jasper.compiler.DefaultErrorHandler.jspError(DefaultErrorHandler.java:51)
        at org.apache.jasper.compiler.ErrorDispatcher.dispatch(ErrorDispatcher.java:409)
        at org.apache.jasper.compiler.ErrorDispatcher.jspError(ErrorDispatcher.java:181)
        at org.apache.jasper.compiler.TagLibraryInfoImpl.<init>(TagLibraryInfoImpl.java:182)
        at org.apache.jasper.compiler.Parser.parseTaglibDirective(Parser.java:383)
        at org.apache.jasper.compiler.Parser.parseDirective(Parser.java:446)
        at org.apache.jasper.compiler.Parser.parseElements(Parser.java:1393)
        at org.apache.jasper.compiler.Parser.parse(Parser.java:130)
        at org.apache.jasper.compiler.ParserController.doParse(ParserController.java:255)
        at org.apache.jasper.compiler.ParserController.parse(ParserController.java:103)
        at org.apache.jasper.compiler.Compiler.generateJava(Compiler.java:185)
        at org.apache.jasper.compiler.Compiler.compile(Compiler.java:347)
        at org.apache.jasper.compiler.Compiler.compile(Compiler.java:327)
        at org.apache.jasper.compiler.Compiler.compile(Compiler.java:314)
        at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:589)
        at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:317)
        at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313)
        at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Thread.java:637)


  • WSIT, Maven, and wsimport -- Can They Work Together?

    - by rtperson
    Hi all, I'm working on a small-ish multi-module project in Maven. We've separated the UI from the database layer using Web Services, and thanks to the jaxws-maven-plugin, the creation of the WSDL and WS client are more or less handled for us. (The plugin is essentially a wrapper around wsgen and wsimport.) So far so good.

    The problem comes when I try to layer WSIT security into the picture. NetBeans allows me to generate the security metadata easily, but wsimport seems completely incapable of dealing with anything beyond a Basic-auth level of security. Here's our current, insecure way of calling wsimport during a Maven build:

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>jaxws-maven-plugin</artifactId>
        <version>1.10</version>
        <executions>
          <execution>
            <goals>
              <goal>wsimport</goal>
            </goals>
            <configuration>
              <wsdlUrls>
                <wsdlUrl>${basedir}/../WebService/target/jaxws/wsgen/wsdl/WebService.wsdl</wsdlUrl>
              </wsdlUrls>
              <packageName>com.yourcompany.appname.ws.client</packageName>
              <sourceDestDir>${basedir}/src/main/java</sourceDestDir>
              <destDir>${basedir}/target/jaxws</destDir>
            </configuration>
          </execution>
        </executions>
      </plugin>

    I have tried playing around with xauthFile, xadditionalHeaders, and passing javax.xml.ws.security.auth.username and password through args. I have also tried using wsimport from the command line to point to the Tomcat-generated WSDL, which has the additional security info. Nothing, however, seems to change the composition of the wsimport-generated files at all.

    So I guess my question here is: to get a WSIT-compliant client, am I stuck abandoning Maven and the jaxws plugin altogether? Is there a way to get a WSIT client to auto-generate? Or will I need to generate the client by hand?

    Let me know if you need any additional info beyond what I've written here. I'm deploying to Tomcat, although that doesn't seem to be an issue, as Maven seems happy to pull Metro into the deployed WAR file. Thanks in advance!


  • Crazy VS Designer Errors

    - by BlueRaja
    Here's a strange one. After renaming a class, one of my forms began giving me errors in the designer, refusing to open. Funny thing is, the form worked just fine when I ran the program. I began reverting my changes to deduce the problem. I have now reverted completely back to the last commit - in which I know the form was working in the designer - cleaned the solution, and deleted the bin/ and obj/ folders, as well as the *.suo file for good measure. The form still does not display in the designer. Here are the errors it gives:

      Could not find 'MyNamespace.MyControl'. Please make sure that the assembly that contains this type
      is referenced. If this type is a part of your development project, make sure that the project has
      been successfully built.

      The variable 'myControl1' is either undeclared or was never assigned.

    The variable is both declared and assigned, and MyControl builds fine (again, the form works fine when the program is actually run). Stranger still, if I try to create a new form and drag a MyControl onto it, I get this Entity Framework error:

      Failed to create component 'MyControl'. The error message follows:
      'System.ArgumentException: The specified named connection is either not found in the configuration,
      not intended to be used with the EntityClient provider, or not valid.
      at System.Data.EntityClient.EntityConnection.ChangeConnectionString(String newConnectionString)
      at System.Data.EntityClient.EntityConnection..ctor(String connectionString)
      at System.Data.Objects.ObjectContect.CreateEntityConnection(String connectionString)
      etc. etc.

    There is nothing wrong with my connection string: it worked before, and, again, it works when I actually run the program (the control already exists on the old form from the previous commit). Any ideas whatsoever? I am completely at a loss.

    [Edit] The only significant code:

    MyControl.cs

      public MyControl()
      {
          _entities = new MyEFEntities(); // Entity Framework generated class
      }

    MyForm.Designer.cs

      private void InitializeComponent()
      {
          this.myControl1 = new MyNamespace.MyControl();
          ...
          this.Controls.Add(this.myControl1);
      }

    MyEFDatabase.Designer.cs

      public MyEFEntities() : base("name=MyEFEntities", "MyEFEntities")
      {
          ...
      }

    App.Config

      <connectionStrings>
        <add name="MyEFEntities"
             connectionString="metadata=res://*/MyEFDatabase.csdl|res://*/MyEFDatabase.ssdl|res://*/MyEFDatabase.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=MyDatabaseServer;Initial Catalog=MyDatabase;Integrated Security=True;MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />
      </connectionStrings>

    I've tried the "replace &quot; with '" trick - didn't help.

    [Edit2] It is happening to new projects also, but not immediately. Only after fiddling around a bit (it has something to do with adding a many-to-one relationship that EF did not figure out on its own), but I can't figure out the exact steps to reproduce.
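
    One pattern that often avoids this class of designer failure is keeping the ObjectContext construction out of the design-time code path: the designer instantiates the control inside Visual Studio, where the application's App.config (and therefore the named connection string) isn't available. A hedged sketch of a design-time guard, reusing the type names from the post:

      using System.ComponentModel;
      using System.Windows.Forms;

      public partial class MyControl : UserControl
      {
          private MyEFEntities _entities;   // the EF context from the post

          public MyControl()
          {
              InitializeComponent();

              // Skip database setup while Visual Studio is hosting the control in the designer.
              // (LicenseManager.UsageMode is reliable even inside the constructor, unlike DesignMode.)
              if (LicenseManager.UsageMode == LicenseUsageMode.Designtime)
                  return;

              _entities = new MyEFEntities();
          }
      }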


  • design of orm tool

    - by ifree
    Hello, I want to design an ORM tool for my daily work, but I'm always worried about the mapping of foreign keys. Here's part of my code:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.Data;

      namespace OrmTool
      {
          [AttributeUsage(AttributeTargets.Property)]
          public class ColumnAttribute : Attribute
          {
              public string Name { get; set; }
              public SqlDbType DataType { get; set; }
              public bool IsPk { get; set; }
          }

          [AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = false)]
          public class TableAttribute : Attribute
          {
              public string TableName { get; set; }
              public string Description { get; set; }
          }

          [AttributeUsage(AttributeTargets.Property)]
          public class ReferencesAttribute : ColumnAttribute
          {
              public Type Host { get; set; }
              public string HostPkName { get; set; }
          }
      }

    I want to use attributes to get the metadata of an entity and then map them, but I think it's really hard to get it done.

      public class DbUtility
      {
          private static readonly string CONNSTR = ConfigurationManager.ConnectionStrings["connstr"].ConnectionString;
          private static readonly Type TableType = typeof(TableAttribute);
          private static readonly Type ColumnType = typeof(ColumnAttribute);
          private static readonly Type ReferenceType = typeof(ReferencesAttribute);

          private static IList<TEntity> EntityListGenerator<TEntity>(string tableName, PropertyInfo[] props, params SqlParameter[] paras)
          {
              return null;
          }

          private static IList<TEntity> ResultList()
          {
              return null;
          }

          private static SqlCommand PrepareCommand(string sql, SqlConnection conn, params SqlParameter[] paras)
          {
              SqlCommand cmd = new SqlCommand();
              cmd.CommandText = sql;
              cmd.Connection = conn;
              if (paras != null)
                  cmd.Parameters.AddRange(paras);
              conn.Open();
              return cmd;
          }
      }

    I don't know how to do the next step. If every entity has its own foreign key, how do I get the return result? The entities look like this:

      [Table(Name = "ArtBook")]
      public class ArtBook
      {
          [Column(Name = "id", IsPk = true, DataType = SqlDbType.Int)]
          public int Id { get; set; }

          [References(Name = "ISBNId", DataType = SqlDbType.Int, Host = typeof(ISBN), HostPkName = "Id")]
          public ISBN BookISBN { get; set; }

          // ... more properties
      }

      public class ISBN
      {
          public int Id { get; set; }
          public bool IsNative { get; set; }
      }

    If I read all ArtBooks from the database and hit a ReferencesAttribute, how do I set the value of BookISBN?
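
    For the mapping step, one rough way to handle a ReferencesAttribute is a reflection-based hydrator: plain columns are copied straight from the data row, while reference properties are resolved through a callback that runs another lookup against the host type's table. A sketch of the idea only - it ignores caching, nulls and SQL injection, and assumes the attribute classes above are in scope (add "using OrmTool;"):

      using System;
      using System.Data;
      using System.Reflection;

      public static class EntityHydrator
      {
          // Builds one TEntity from the current row of an open IDataReader.
          // loadReference is a callback that knows how to fetch a referenced entity
          // given its host type and primary-key value (typically another query).
          public static TEntity Hydrate<TEntity>(IDataRecord row, Func<Type, object, object> loadReference)
              where TEntity : new()
          {
              var entity = new TEntity();

              foreach (PropertyInfo prop in typeof(TEntity).GetProperties())
              {
                  var reference = (ReferencesAttribute)Attribute.GetCustomAttribute(prop, typeof(ReferencesAttribute));
                  if (reference != null)
                  {
                      // The column holds the foreign-key value; resolve it to the referenced object.
                      object fkValue = row[reference.Name];
                      prop.SetValue(entity, loadReference(reference.Host, fkValue), null);
                      continue;
                  }

                  var column = (ColumnAttribute)Attribute.GetCustomAttribute(prop, typeof(ColumnAttribute));
                  if (column != null && row[column.Name] != DBNull.Value)
                      prop.SetValue(entity, row[column.Name], null);
              }

              return entity;
          }
      }

    Here loadReference would typically issue something like "SELECT ... FROM ISBN WHERE Id = @id" (using HostPkName) and hydrate the ISBN entity the same way, which is where lazy loading or a per-request cache could later be added.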


  • Migration Core Data error code 134130

    - by magichero
    I want to do a migration with 2 Core Data databases. I have read the Apple developer documentation. For the first database, I added some attributes (string, integer and date properties) to the new version of the model and, following all the steps, the migration succeeded. For the second database I also added attributes (string, integer, date, transformable and binary-data properties) to the new model version and followed the same steps, but the system returns an error (134130). Here is the code:

      if (persistentStoreCoordinator_) {
          PMReleaseSafely(persistentStoreCoordinator_);
      }

      // Notify
      NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
      [nc postNotificationName:GCalWillMigrationNotification object:self];

      //
      NSString *sourceStoreType = NSSQLiteStoreType;
      NSString *dataStorePath = [PMUtility dataStorePathForName:GCalDBWarehousePersistentStoreName];
      NSURL *storeURL = [NSURL fileURLWithPath:dataStorePath];
      BOOL storeExists = [[NSFileManager defaultManager] fileExistsAtPath:dataStorePath];

      //
      NSError *error = nil;
      NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                               [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
                               [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];
      persistentStoreCoordinator_ = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:self.managedObjectModel];
      [persistentStoreCoordinator_ addPersistentStoreWithType:sourceStoreType
                                                configuration:nil
                                                          URL:storeURL
                                                      options:options
                                                        error:&error];
      if (error != nil) {
          abort();
      }

    error is not nil, and below is the log:

      Error Domain=NSCocoaErrorDomain Code=134130 "The operation couldn\u2019t be completed. (Cocoa error 134130.)"
      UserInfo=0x856f790 {URL=file://localhost/Users/greensun/Library/Application%20Support/iPhone%20Simulator/5.0/Applications/D10712DE-D9FE-411A-8182-C4F58C60EC6D/Library/Application%20Support/Promise%20Mail/GCalDBWarehouse.sqlite,
      metadata={type = immutable dict, count = 7, entries =
        2 : {contents = "NSStoreModelVersionIdentifiers"} = {type = immutable, count = 1, values = ( 0 : {contents = ""} )}
        4 : {contents = "NSPersistenceFrameworkVersion"} = {value = +386, type = kCFNumberSInt64Type}
        6 : {contents = "NSStoreModelVersionHashes"} = {type = immutable dict, count = 2, entries =
          0 : {contents = "SyncEvent"} = {length = 32, capacity = 32, bytes = 0xfdae355f55c13fbd0344415fea26c8bb ... 4c1721aadd4122aa}
          1 : {contents = "ImportEvent"} = {length = 32, capacity = 32, bytes = 0x7676888f0d7eaff4d1f844343028ce02 ... 040af6cbe8c5fd01}
        }
        7 : {contents = "NSStoreUUID"} = {contents = "51678BAC-CCFB-4D00-AF5C-8FA1BEDA6440"}
        8 : {contents = "NSStoreType"} = {contents = "SQLite"}
        9 : {contents = "_NSAutoVacuumLevel"} = {contents = "2"}
        10 : {contents = "NSStoreModelVersionHashesVersion"} = {value = +3, type = kCFNumberSInt32Type}
      }, reason=Can't find model for source store}

    I have tried a lot of solutions but none of them work. I just added more attributes to the two new model versions, and only the first migration succeeded. Thanks for reading and for your help.


  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.)

    I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of hundreds of thousands of words.

    In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of

      SELECT id, * FROM textTable
      JOIN (SELECT DISTINCT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz') ON id=textTableId
      LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.

      SELECT * FROM textTable
      WHERE id IN (SELECT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz')
      LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy.

    I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control over the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what is the usual attack? Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes.

    I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?


  • What do you expect from a package manager for Emacs

    - by tarsius
    Although several hundred Emacs Lisp libraries exist, GNU Emacs does not have an (internal) package manager. I guess that most users would agree that it is currently rather inconvenient to find, install and especially keep up-to-date Emacs Lisp libraries.

    These pages make life a bit easier:
    - Emacs Lisp List - Problem: I see dead people (links).
    - Emacswiki - Problem: May contain traces of nuts (malicious code).

    These are some package managers:
    - XEmacs package manager
    - package.el - ELPA
    - pases
    - install.el
    - install-elisp.el
    - plugin.el
    - use-package.el
    - jem-pkg.el
    - epkg/elm - the one I am working on.

    And these are some packages that provide functionality that might be useful in a package manager:
    - ell.el - Browse the Emacs Lisp List
    - genauto.el - helps generate autoloads for your elisp packages
    - date-calc.el - date calculation and parsing routines
    - strptime.el - partial implementation of POSIX date and time parsing
    - wikirel.el - Visit relevant pages on the Emacs Wiki
    - loadhist.el, lib-requires.el, elisp-depend.el - Commands to list Emacs Lisp library dependencies
    - project-root.el - Define a project root and take actions based upon it

    So I would like to know from you what you consider important/unimportant/supplementary... in a package manager for Emacs.

    Some ideas:
    - Many packages (incorporate the Emacs Lisp List and other lists of libraries).
    - Only packages that have been tested.
    - Support for more than one package archive (so people can choose between many/tested packages).
    - Dependencies calculated based on required features only.
    - Dependencies take particular versions into account.
    - Only use versions that have been released upstream.
    - Use versions from version control systems if available.
    - Packages are categorized.
    - Packages can be uninstalled and updated, not only installed.
    - Support creating forks of the upstream version of packages.
    - Support publishing these forks.
    - Support choosing a fork.
    - After installation packages are activated.
    - Generate autoloads.
    - Integration with Emacswiki (see wikirel.el).
    - Users can tag, comment ... packages and share that information.
    - Only FSF-assigned/GPL/FOSS software, or don't care about license.
    - Package manager should be integrated in Emacs.
    - Support contacting the author.
    - Lots of metadata.
    - Suggest alternatives before installing a particular package.

    Some discussions about the subject at hand:
    - emacs-devel 20080801
    - comp.emacs 20021121
    - RationalElispPackaging

    I am hoping for these kinds of answers:
    - Pointers to more implementations, discussions etc.
    - Lengthy descriptions of a set of features that make up your ideal package manager.
    - Descriptions of one particular desired/undesired feature. This has the advantage that the regular voting mechanism allows us to see what features are most welcomed. Feel free to elaborate on my ideas from above.
    - Surprise me.


  • C# XPath Not Finding Anything

    - by ehdv
    I'm trying to use XPath to select the items which have a facet with Location values, but currently my attempts even to just select all items fail: the system happily reports that it found 0 items, then returns (instead the nodes should be processed by a foreach loop). I'd appreciate help either with making my original query work or with just getting XPath to work at all.

    XML

      <?xml version="1.0" encoding="UTF-8" ?>
      <Collection Name="My Collection" SchemaVersion="1.0"
                  xmlns="http://schemas.microsoft.com/collection/metadata/2009"
                  xmlns:p="http://schemas.microsoft.com/livelabs/pivot/collection/2009"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
        <FacetCategories>
          <FacetCategory Name="Current Address" Type="Location"/>
          <FacetCategory Name="Previous Addresses" Type="Location" />
        </FacetCategories>
        <Items>
          <Item Id="1" Name="John Doe">
            <Facets>
              <Facet Name="Current Address">
                <Location Value="101 America Rd, A Dorm Rm 000, Chapel Hill, NC 27514" />
              </Facet>
              <Facet Name="Previous Addresses">
                <Location Value="123 Anywhere Ln, Darien, CT 06820" />
                <Location Value="000 Foobar Rd, Cary, NC 27519" />
              </Facet>
            </Facets>
          </Item>
        </Items>
      </Collection>

    C#

      public void countItems(string fileName)
      {
          XmlDocument document = new XmlDocument();
          document.Load(fileName);
          XmlNode root = document.DocumentElement;
          XmlNodeList xnl = root.SelectNodes("//Item");
          Console.WriteLine(String.Format("Found {0} items", xnl.Count));
      }

    There's more to the method than this, but since this is all that gets run, I'm assuming the problem lies here. Calling root.ChildNodes accurately returns FacetCategories and Items, so I am completely at a loss. Thanks for your help!
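
    The likely culprit is the default namespace on <Collection>: an unprefixed XPath step such as Item only matches elements in no namespace, so //Item finds nothing even though the nodes are there. A sketch of the namespace-qualified query (the "c" prefix is arbitrary), including a filter for items that have a Location facet:

      using System;
      using System.Xml;

      public static class CollectionReader
      {
          public static void CountItems(string fileName)
          {
              var document = new XmlDocument();
              document.Load(fileName);

              // Map a prefix to the document's default namespace so XPath can see it.
              var ns = new XmlNamespaceManager(document.NameTable);
              ns.AddNamespace("c", "http://schemas.microsoft.com/collection/metadata/2009");

              XmlNodeList items = document.SelectNodes("//c:Item", ns);
              Console.WriteLine("Found {0} items", items.Count);

              // Items whose facets include at least one Location value.
              XmlNodeList located = document.SelectNodes("//c:Item[.//c:Facet/c:Location/@Value]", ns);
              Console.WriteLine("Found {0} items with Location facets", located.Count);
          }
      }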


  • Data denormalization and C# objects DB serialization

    - by Robert Koritnik
    I'm using a DB table with various different entities. This means that I can't have an arbitrary number of fields in it to save all kinds of different entities. Instead I want to save just the most important fields (dates, reference IDs - kind of foreign keys to various other tables, the most important text fields, etc.) and an additional text field where I'd like to store the more complete object data.

    The most obvious solution would be to use XML strings and store those. The second most obvious choice would be JSON, which is usually shorter and probably also faster to serialize/deserialize. But is it really? My objects also wouldn't need to be strictly serializable, because JsonSerializer is usually able to serialize anything - even anonymous objects, which may as well be used here. What would be the most optimal solution to this problem?

    Additional info

    My DB is highly normalised and I'm using Entity Framework, but for the purpose of having external super-fast full-text search functionality I'm accepting a bit of DB denormalisation. Just for info, I'm using SphinxSE on top of MySQL. Sphinx would return row IDs that I would use to quickly query my index-optimised conglomerate table and get the most important data from it much, much faster than querying multiple tables all over my DB.

    My table would have columns like:
    - RowID (auto increment)
    - EntityID (of the actual entity - but not directly related, because this would have to point to different tables)
    - EntityType (so I would be able to get the actual entity if needed)
    - DateAdded (record timestamp when it's been added into this table)
    - Title
    - Metadata (serialized data related to the particular entity type)

    This table would be indexed with the Sphinx indexer. When I search for data using this indexer I would provide a series of EntityIDs and a limit date. The indexer would have to return a very limited, paged amount of RowIDs ordered by DateAdded (descending). I would then just join these RowIDs to my table and get the relevant results. So this won't actually be full-text search but a filtering search. Getting RowIDs would be very fast this way, and getting results back from the table would be much faster than EntityID and DateAdded comparisons, even though they would be properly indexed.
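
    For the Metadata column itself, a minimal sketch of the JSON route using the BCL's JavaScriptSerializer (Json.NET's JsonConvert would work the same way). The ArticleMetadata payload type is made up purely for illustration; in practice it would be whatever per-entity-type details don't fit the common columns:

      using System.Web.Script.Serialization;   // System.Web.Extensions assembly

      public class ArticleMetadata
      {
          public string Author { get; set; }
          public string Summary { get; set; }
          public string[] Tags { get; set; }
      }

      public static class MetadataSerializer
      {
          private static readonly JavaScriptSerializer Serializer = new JavaScriptSerializer();

          // Serialize the entity-specific payload before writing the denormalised row.
          public static string Pack(object payload)
          {
              return Serializer.Serialize(payload);
          }

          // Deserialize when a search row is read back and the full details are needed.
          public static T Unpack<T>(string metadata)
          {
              return Serializer.Deserialize<T>(metadata);
          }
      }

      // Usage:
      //   string json = MetadataSerializer.Pack(new ArticleMetadata { Author = "...", Tags = new[] { "a", "b" } });
      //   ArticleMetadata meta = MetadataSerializer.Unpack<ArticleMetadata>(json);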


  • json problems with making a ruby on rails application

    - by Prince Merdz
    So I'm using Bitnami to learn Ruby on Rails. I have also previously tried the manual installation of Ruby and Rails and was met by the same problem, so I thought I should first try the easy packaged deal from Bitnami. Anyway, my problem with json is that it causes bundle install to fail. First, the automatic bundle install that rails new runs fails because of an SSL error, which is easily solved by changing the source in the Gemfile from https to http. However, when I try to bundle install it hits another error when it tries to install json:

      C:\RubyStack-3.2.7-0\projects\testing>bundle install
      Fetching gem metadata from http://rubygems.org/.........
      Using rake (0.9.2.2)
      Using i18n (0.6.0)
      Using multi_json (1.3.6)
      Installing activesupport (3.2.8)
      Using builder (3.0.0)
      Installing activemodel (3.2.8)
      Using erubis (2.7.0)
      Using journey (1.0.4)
      Using rack (1.4.1)
      Using rack-cache (1.2)
      Using rack-test (0.6.1)
      Using hike (1.2.1)
      Using tilt (1.3.3)
      Using sprockets (2.1.3)
      Installing actionpack (3.2.8)
      Using mime-types (1.19)
      Using polyglot (0.3.3)
      Using treetop (1.4.10)
      Using mail (2.4.4)
      Installing actionmailer (3.2.8)
      Using arel (3.0.2)
      Using tzinfo (0.3.33)
      Installing activerecord (3.2.8)
      Installing activeresource (3.2.8)
      Using bundler (1.1.5)
      Using coffee-script-source (1.3.3)
      Using execjs (1.4.0)
      Using coffee-script (2.2.0)
      Using rack-ssl (1.3.2)
      Installing json (1.7.5) with native extensions
      Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.
      C:/RUBYST~1.7-0/ruby/bin/ruby.exe extconf.rb
      creating Makefile
      make
      0 [main] echo 5244 open_stackdumpfile: Dumping stack trace to echo.exe.stackdump
      make: *** [generator-i386-mingw32.def] Error 5
      Gem files will remain installed in C:/RUBYST~1.7-0/ruby/lib/ruby/gems/1.9.1/gems/json-1.7.5 for inspection.
      Results logged to C:/RUBYST~1.7-0/ruby/lib/ruby/gems/1.9.1/gems/json-1.7.5/ext/json/ext/generator/gem_make.out
      An error occured while installing json (1.7.5), and Bundler cannot continue.
      Make sure that `gem install json -v '1.7.5'` succeeds before bundling.

    This is the gem_make.out file it produces after trying to install json (by the way, Windows also reports that echo.exe has stopped working while running gem install json):

      C:/RUBYST~1.7-0/ruby/bin/ruby.exe extconf.rb
      creating Makefile
      make
      0 [main] echo 5244 open_stackdumpfile: Dumping stack trace to echo.exe.stackdump
      make: *** [generator-i386-mingw32.def] Error 5

    I can't even start learning RoR because the setup is already a huge pain. (By the way, I have no prior experience with web frameworks, just desktop programming.) Help?


  • Design-time failure: WPF, Usercontrols and Namespaces

    - by Simon Woods
    Hi, I have a very simple WPF project comprising a Window and a UserControl. I'm very much in a learning phase. It works fine when I run it. However, I am unable to see the form at design time. The problem, I believe, is something to do with namespaces, but I don't understand where. It may well be a simple error.

    Main Window XAML

      <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
              xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
              xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
              xmlns:views="clr-namespace:UserLogin"
              x:Class="UserLogin.MainView"
              x:Name="MainViewWindow"
              mc:Ignorable="d"
              Title="Login" Height="141" Width="347">
        <Grid>
          <views:LoginView />
        </Grid>
      </Window>

    Main Window CodeBehind

      Imports Microsoft.VisualBasic
      Imports System
      Imports System.Windows
      Imports UserLogin

      Namespace UserLogin
          Partial Public Class MainView
              Inherits System.Windows.Window

              Public Sub New()
                  InitializeComponent()
              End Sub
          End Class
      End Namespace

    UserControl XAML

      <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                   xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                   xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                   xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                   x:Class="UserLogin.LoginView"
                   x:Name="LoginViewControl"
                   mc:Ignorable="d"
                   d:DesignHeight="96" d:DesignWidth="298">
        <Grid Height="96" Width="298">
          <Button Command="{Binding OKCommand}" Height="21" Margin="0,0,90,16" Name="btnOK" VerticalAlignment="Bottom" HorizontalAlignment="Right" Width="76">OK</Button>
          <Button Command="{Binding CancelCommand}" Height="21" HorizontalAlignment="Right" Margin="0,0,9,16" Name="btnCancel" VerticalAlignment="Bottom" Width="75">Cancel</Button>
          <Label Height="23" HorizontalAlignment="Left" Margin="10,5,0,0" Name="Label1" VerticalAlignment="Top" Width="85">Name:</Label>
          <Label HorizontalAlignment="Left" Margin="10,32,0,0" Name="Label2" Width="85" Height="29" VerticalAlignment="Top">Password:</Label>
          <TextBox Margin="0,31,6,0" Name="txtPassword" Height="22" VerticalAlignment="Top" HorizontalAlignment="Right" Width="182" />
          <ComboBox Height="22" Margin="110,6,6,0" Name="cboNames" VerticalAlignment="Top" />
        </Grid>
      </UserControl>

    UserControl CodeBehind

      Imports Microsoft.VisualBasic
      Imports System
      Imports UserLogin

      Namespace UserLogin
          Partial Public Class LoginView
              Inherits System.Windows.Controls.UserControl

              Public Sub New()
                  InitializeComponent()
              End Sub
          End Class
      End Namespace

    I think I'm missing something with this namespace declaration, xmlns:views="clr-namespace:UserLogin", since IntelliSense doesn't give me the usercontrol declared within it in the XAML designer, but rather reports the error "Unable to load the metadata for the assembly ... etc etc".

    Thanks for any suggestions,
    Simon


  • WCF Self Host Service - Endpoints in C#

    - by Kyle
    These are my first few attempts at creating a self-hosted service. I'm trying to make something that will accept a query string and return some text, but I've had a few issues. All the documentation talks about endpoints being created automatically for each base address if they are not found in a config file. This doesn't seem to be the case for me: I get the "Service has zero application endpoints..." exception. Manually specifying a base endpoint as below seems to resolve this:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.ServiceModel;
      using System.ServiceModel.Description;

      namespace TestService
      {
          [ServiceContract]
          public interface IHelloWorldService
          {
              [OperationContract]
              string SayHello(string name);
          }

          public class HelloWorldService : IHelloWorldService
          {
              public string SayHello(string name)
              {
                  return string.Format("Hello, {0}", name);
              }
          }

          class Program
          {
              static void Main(string[] args)
              {
                  string baseaddr = "http://localhost:8080/HelloWorldService/";
                  Uri baseAddress = new Uri(baseaddr);

                  // Create the ServiceHost.
                  using (ServiceHost host = new ServiceHost(typeof(HelloWorldService), baseAddress))
                  {
                      // Enable metadata publishing.
                      ServiceMetadataBehavior smb = new ServiceMetadataBehavior();
                      smb.HttpGetEnabled = true;
                      smb.MetadataExporter.PolicyVersion = PolicyVersion.Policy15;
                      host.Description.Behaviors.Add(smb);

                      // For some reason a default endpoint does not get created here.
                      host.AddServiceEndpoint(typeof(IHelloWorldService), new BasicHttpBinding(), baseaddr + "SayHello");

                      host.Open();

                      Console.WriteLine("The service is ready at {0}", baseAddress);
                      Console.WriteLine("Press to stop the service.");
                      Console.ReadLine();

                      // Close the ServiceHost.
                      host.Close();
                  }
              }
          }
      }

    I still think I'm doing something wrong, as I don't get the normal "This is a web service...etc..." page when I load up the URL.

    How would I go about setting this up to return the value of name in SayHello(string name) when requested like this: localhost:8080/HelloWorldService/SayHello?name=kyle? Do I have to create an endpoint for the SayHello contract as well?

    I'm trying to walk before running, but this just seems like crawling... "Service has zero application endpoints..."
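
    For a query-string style call such as SayHello?name=kyle, the usual route is a WebHttpBinding (REST-style) endpoint rather than BasicHttpBinding, which expects SOAP envelopes. A hedged sketch of how that could look with WebServiceHost from System.ServiceModel.Web - the UriTemplate and addresses are illustrative:

      using System;
      using System.ServiceModel;
      using System.ServiceModel.Web;   // requires a reference to System.ServiceModel.Web.dll

      [ServiceContract]
      public interface IHelloWorldService
      {
          // GET http://localhost:8080/HelloWorldService/SayHello?name=kyle
          [OperationContract]
          [WebGet(UriTemplate = "SayHello?name={name}")]
          string SayHello(string name);
      }

      public class HelloWorldService : IHelloWorldService
      {
          public string SayHello(string name)
          {
              return string.Format("Hello, {0}", name);
          }
      }

      class Program
      {
          static void Main()
          {
              var baseAddress = new Uri("http://localhost:8080/HelloWorldService/");

              // WebServiceHost adds a WebHttpBinding endpoint (with WebHttpBehavior) automatically.
              using (var host = new WebServiceHost(typeof(HelloWorldService), baseAddress))
              {
                  host.Open();
                  Console.WriteLine("Listening at {0} - press Enter to stop.", baseAddress);
                  Console.ReadLine();
              }
          }
      }

    By default the reply comes back wrapped in XML; setting ResponseFormat on [WebGet] (for example to WebMessageFormat.Json) changes that if plain data is preferred.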


  • memcpy segmentation fault on linux but not os x

    - by Andre
    I'm working on implementing a log-based file system for a file as a class project. I have a good amount of it working on my 64-bit OS X laptop, but when I try to run the code on the CS department's 32-bit Linux machines, I get a seg fault. The API we're given allows writing DISK_SECTOR_SIZE (512) bytes at a time. Our log record consists of the 512 bytes the user wants to write as well as some metadata (which sector he wants to write to, the type of operation, etc). All in all, the size of the "record" object is 528 bytes, which means each log record spans 2 sectors on the disk. The first record writes 0-512 on sector 0, and 0-15 on sector 1. The second record writes 16-512 on sector 1, and 0-31 on sector 2. The third record writes 32-512 on sector 2, and 0-47 on sector 3. Etc.

    So what I do is read the two sectors I'll be modifying into 2 freshly allocated buffers, then copy starting at record into buf1+the calculated offset for 512-offset bytes. This works correctly on both machines. However, the second memcpy fails. Specifically, "record+DISK_SECTOR_SIZE-offset" in the below code segfaults, but only on the Linux machine.

    Running some random tests, it gets more curious. The Linux machine reports sizeof(Record) to be 528. Therefore, if I tried to memcpy from record+500 into buf for 1 byte, it shouldn't have a problem. In fact, the biggest offset I can get from record is 254. That is, memcpy(buf1, record+254, 1) works, but memcpy(buf1, record+255, 1) segfaults. Does anyone know what I'm missing?

      Record *record = malloc(sizeof(Record));
      record->tid = tid;
      record->opType = OP_WRITE;
      record->opArg = sector;

      int i;
      for (i = 0; i < DISK_SECTOR_SIZE; i++) {
          record->data[i] = buf[i]; // *buf is passed into this function
      }

      char* buf1 = malloc(DISK_SECTOR_SIZE);
      char* buf2 = malloc(DISK_SECTOR_SIZE);

      d_read(ad->disk, ad->curLogSector, buf1);
      d_read(ad->disk, ad->curLogSector+1, buf2);

      memcpy(buf1+offset, record, DISK_SECTOR_SIZE-offset);
      memcpy(buf2, record+DISK_SECTOR_SIZE-offset, offset+sizeof(Record)-sizeof(record->data));


  • Maven mercurial extension constantly fails

    - by TheLQ
    After 2+ hours I was able to get the maven-scm-provider-hg extension (for pushing to Mercurial repos from Maven) semi-working, meaning that it was executing commands instead of just giving errors. However, I think I've run into a wall with this error:

      [INFO] [deploy:deploy {execution: default-deploy}]
      [INFO] Retrieving previous build number from pircbotx.googlecode.com
      [INFO] Removing C:\DOCUME~1\Owner\LOCALS~1\Temp\wagon-scm1210107000.checkout\pircbotx\pircbotx\1.3-SNAPSHOT
      [INFO] EXECUTING: cmd.exe /X /C "hg clone -r tip https://*SNIP*@site.pircbotx.googlecode.com/hg/maven2/snapshots/pircbotx/pircbotx/1.3-SNAPSHOT C:\DOCUME~1\Owner\LOCALS~1\Temp\wagon-scm1210107000.checkout\pircbotx\pircbotx\1.3-SNAPSHOT"
      [INFO] EXECUTING: cmd.exe /X /C "hg locate"
      [INFO] repository metadata for: 'snapshot pircbotx:pircbotx:1.3-SNAPSHOT' could not be found on repository: pircbotx.googlecode.com, so will be created
      Uploading: scm:hg:https://site.pircbotx.googlecode.com/hg/maven2/snapshots/pircbotx/pircbotx/1.3-SNAPSHOT/pircbotx-1.3-SNAPSHOT.jar
      [INFO] ------------------------------------------------------------------------
      [ERROR] BUILD ERROR
      [INFO] ------------------------------------------------------------------------
      [INFO] Error deploying artifact: Error listing repository: No such command 'list'.

    What on earth would cause that error? I'm on a Windows box, so any commands that aren't real commands give "'list' is not recognized as an internal or external command...", not "No such command 'list'."

    POM

      <build>
        <extensions>
          <extension>
            <groupId>org.apache.maven.scm</groupId>
            <artifactId>maven-scm-provider-hg</artifactId>
            <version>1.4</version>
          </extension>
          <extension>
            <groupId>org.apache.maven.wagon</groupId>
            <artifactId>wagon-scm</artifactId>
            <version>1.0-beta-7</version>
          </extension>
        </extensions>
      ...
      <distributionManagement>
        <snapshotRepository>
          <id>pircbotx.googlecode.com</id>
          <name>PircBotX Site</name>
          <url>scm:hg:https://site.pircbotx.googlecode.com/hg/maven2/snapshots</url>
          <uniqueVersion>false</uniqueVersion>
        </snapshotRepository>
      </distributionManagement>

    Mercurial version

      W:\programming\pircbot-hg>hg version
      Mercurial Distributed SCM (version 1.7.2)

    Any suggestions?


  • Need help with joins in sqlalchemy

    - by Steve
    I'm new to Python, as well as SQLAlchemy, but not the underlying development and database concepts. I know what I want to do and how I'd do it manually, but I'm trying to learn how an ORM works.

    I have two tables, Images and Keywords. The Images table contains an id column that is its primary key, as well as some other metadata. The Keywords table contains only an id column (foreign key to Images) and a keyword column. I'm trying to properly declare this relationship using the declarative syntax, which I think I've done correctly:

      Base = declarative_base()

      class Keyword(Base):
          __tablename__ = 'Keywords'
          __table_args__ = {'mysql_engine' : 'InnoDB'}
          id = Column(Integer, ForeignKey('Images.id', ondelete='CASCADE'), primary_key=True)
          keyword = Column(String(32), primary_key=True)

      class Image(Base):
          __tablename__ = 'Images'
          __table_args__ = {'mysql_engine' : 'InnoDB'}
          id = Column(Integer, primary_key=True, autoincrement=True)
          name = Column(String(256), nullable=False)
          keywords = relationship(Keyword, backref='image')

    This represents a many-to-many relationship. One image can have many keywords, and one keyword can relate back to many images. I want to do a keyword search of my images. I've tried the following with no luck. Conceptually this would've been nice, but I understand why it doesn't work:

      image = session.query(Image).filter(Image.keywords.contains('boy'))

    I keep getting errors about no foreign key relationship, which seems clearly defined to me. I saw something about making sure I get the right 'join', and I'm using 'from sqlalchemy.orm import join', but still no luck:

      image = session.query(Image).select_from(join(Image, Keyword)).\
          filter(Keyword.keyword == 'boy')

    I added the specific join clause to the query to help it along, though as I understand it, I shouldn't have to do this:

      image = session.query(Image).select_from(join(Image, Keyword, Image.id==Keyword.id)).filter(Keyword.keyword == 'boy')

    So finally I switched tactics and tried querying the keywords and then using the backreference. However, when I try to use the '.images' iterating over the result, I get an error that the 'image' property doesn't exist, even though I did declare it as a backref:

      result = session.query(Keyword).filter(Keyword.keyword == 'boy').all()

    I want to be able to query a unique set of image matches on a set of keywords. I just can't guess my way to the syntax, and I've spent days reading the SQLAlchemy documentation trying to piece this out myself. I would very much appreciate anyone who can point out what I'm missing.


  • What is the best way, if possible, to send information from a Java PrintStream to a JTextPane?

    - by Daniel Reeves
    In Java, I have a package that translates XML metadata from one standard to another. This package is ultimately accessed through a single function and sends all of its output through a PrintStream object. The output sent is just a status of each file and whether or not it was translated. This is pretty fine and dandy if I'm just printing to System.out, but I'm actually wanting to print this to a JTextPane while it translates (kind of like a progress text box). It wouldn't be a big deal to just print the status after it was done translating the XML, but since there may be thousands of XML files, that's just not feasible.

    One thing that I've tried is to use a thread that takes all of the information from the PrintStream (which is attached to a ByteArrayOutputStream) and lets it send any new information to the text pane. Unfortunately, this still sends the information all at once at the end of the translation. It does work correctly for System.out. Here's the code that does the translation and tries to show the output:

      public class ConverterGUI extends javax.swing.JFrame {
          boolean printToResultsBox = false;
          PrintStream printStream = null;
          ByteArrayOutputStream baos = null;

          private class ResultsPrinter implements Runnable {
              public ResultsPrinter() {
                  baos = new ByteArrayOutputStream();
                  printStream = new PrintStream(baos);
              }

              public void run() {
                  String tempString = "";
                  while (printToResultsBox) {
                      try {
                          if (!baos.toString().equals(tempString)) {
                              tempString = baos.toString();
                              resultsBox.setText(tempString);
                          }
                      } catch (Exception ex) {
                      }
                  }
              }
          }

          ...

          ResultsPrinter rp = new ResultsPrinter();
          Thread thread = new Thread(rp);
          thread.start();

          // Do the translation.
          try {
              printToResultsBox = true;
              boolean success = false;
              TranslationEngine te = new TranslationEngine();
              // fileOrFolderToConvert is a text box in the GUI.
              // linkNeeded and destinationFile are just parameters for the translation process.
              success = te.translate(fileOrFolderToConvert.getText(), linkNeeded, destinationFile, printStream);
              if (success) {
                  printStream.println("File/folder translation was a success.");
              }
              resultsBox.setText(baos.toString());
          } catch (Exception ex) {
              printStream.println("File translation failed.");
          } finally {
              printToResultsBox = false;
          }
          ...
      }

    Ultimately, this code prints out to the JTextPane just fine after all the translation is done, but not during. Any suggestions? Do I need to change the PrintStream to something else?


  • Hide public method used to help test a .NET assembly

    - by ChrisW
    I have a .NET assembly, to be released. Its release build includes:
    - A public, documented API of methods which people are supposed to use
    - A public but undocumented API of other methods, which exist only in order to help test the assembly, and which people are not supposed to use

    The assembly to be released is a custom control, not an application. To regression-test it, I run it in a testing framework/application, which uses (in addition to the public/documented API) some advanced/undocumented methods which are exported from the control.

    For the public methods which I don't want people to use, I excluded them from the documentation using the <exclude> tag (supported by the Sandcastle Help File Builder), and the [EditorBrowsable] attribute, for example like this:

      /// <summary>
      /// Gets a <see cref="IEditorTransaction"/> instance, which helps
      /// to combine several DOM edits into a single transaction, which
      /// can be undone and redone as if they were a single, atomic operation.
      /// </summary>
      /// <returns>A <see cref="IEditorTransaction"/> instance.</returns>
      IEditorTransaction createEditorTransaction();

      /// <exclude/>
      [EditorBrowsable(EditorBrowsableState.Never)]
      void debugDumpBlocks(TextWriter output);

    This successfully removes the method from the API documentation, and from IntelliSense. However, if in a sample application program I right-click on an instance of the interface to see its definition in the metadata, I can still see the method, and the [EditorBrowsable] attribute as well, for example:

      // Summary:
      //     Gets a ModelText.ModelDom.Nodes.IEditorTransaction instance, which helps
      //     to combine several DOM edits into a single transaction, which can be undone
      //     and redone as if they were a single, atomic operation.
      //
      // Returns:
      //     A ModelText.ModelDom.Nodes.IEditorTransaction instance.
      IEditorTransaction createEditorTransaction();
      //
      [EditorBrowsable(EditorBrowsableState.Never)]
      void debugDumpBlocks(TextWriter output);

    Questions:
    - Is there a way to hide a public method, even from the metadata?
    - If not, then instead, for this scenario, would you recommend making the methods internal and using the InternalsVisibleTo attribute?
    - Or would you recommend some other way, and if so what and why?

    Thank you.
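
    If the internal + InternalsVisibleTo route is chosen, the mechanical change is small: mark the test-only members internal and grant the test assembly access from AssemblyInfo. One caveat: members of a public interface can't be made internal individually, so test-only operations usually end up on a class or on a separate internal interface. A brief sketch with placeholder assembly and type names (a strong-named assembly would also need the PublicKey= form of the attribute):

      // In the control assembly, e.g. Properties/AssemblyInfo.cs
      using System.Runtime.CompilerServices;

      [assembly: InternalsVisibleTo("ModelText.TestHarness")]   // hypothetical test assembly name

      public interface IEditorTransaction
      {
          void Commit();
          void Rollback();
      }

      public class DomEditor
      {
          public IEditorTransaction CreateEditorTransaction()
          {
              // documented, public API
              return null;
          }

          // No longer part of the public surface, so it disappears from the metadata
          // view that normal consumers see; only ModelText.TestHarness can call it.
          internal void DebugDumpBlocks(System.IO.TextWriter output)
          {
          }
      }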


  • error with redirect using listener JSF 2.0

    - by Ray
    I have an index.xhtml page:

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml"
            xmlns:h="http://java.sun.com/jsf/html"
            xmlns:f="http://java.sun.com/jsf/core"
            xmlns:ui="http://java.sun.com/jsf/facelets">
        <f:view>
          <ui:insert name="metadata" />
          <f:event type="preRenderView" listener="#{item.show}" />
          <h:body></h:body>
        </f:view>
      </html>

    And in a session-scoped bean, this method:

      public void show() throws IOException, DAOException {
          ExternalContext externalContext = FacesContext.getCurrentInstance()
                  .getExternalContext();
          // smth
          String rootPath = externalContext.getRealPath("/");
          String realPath = rootPath + "pages\\template\\body\\list.xhtml";
          externalContext.redirect(realPath);
      }

    I would expect this to redirect to the next page, but instead I get "browser can't show page" rather than list.xhtml (if I make list.xhtml the welcome page I don't get the error, which suggests the error is connected with the redirect):

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml"
            xmlns:h="http://java.sun.com/jsf/html"
            xmlns:f="http://java.sun.com/jsf/core"
            xmlns:ui="http://java.sun.com/jsf/facelets">
        <h:body>
          <ui:composition template="/pages/layouts/mainLayout.xhtml">
            <ui:define name="content">
              <h:form></h:form>
            </ui:define>
          </ui:composition>
        </h:body>
      </html>

    There are no errors in the console. In web.xml:

      <welcome-file-list>
        <welcome-file>index.xhtml</welcome-file>
      </welcome-file-list>
      <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
      </servlet>
      <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>*.xhtml</url-pattern>
      </servlet-mapping>

    What could be the reason for this problem?


  • MVC Entity Framework: Cannot open user default database. Login failed.

    - by Michael
    This type of stuff drives me nuts. I'm having trouble finding the exact issue that I'm having - maybe I just don't know the terminology. Anyway, I had a working website using MVC and Entity Framework, but then I coded an error in a partial view page (ascx). Then all of a sudden I started to get this message:

      Cannot open user default database. Login failed. Login failed for user 'NT AUTHORITY\SYSTEM'

    I've found plenty of suggestions about opening SQL Server Management Studio, double-clicking Security, then Logins, then NT AUTHORITY\SYSTEM, and then User Mapping. In this view I'm supposed to check the box for my database so that this user is mapped to this login. However, since I created my database in Visual Studio 2008 as part of my solution, it doesn't show up to allow me to click on it. So what do I do now?

    What drives me nuts is that everything was working fine. I was using my computer name to access the website and everything was working fine until the coding error. I've fixed the error but I'm still getting this message. I should also mention that this error started yesterday too, around the same time, but later cleared itself up. If I use localhost to access the site, it works just fine.

    IIS7 configuration for my website:
    - Application Pool = DefaultAppPool
    - Physical Path Credentials Logon = ClearText

    Within connection strings, I do see the one for my database in this solution. Entry Type is local:

      metadata=res://*/Models.DataModel.csdl|res://*/Models.DataModel.ssdl|res://*/Models.DataModel.msl;
      provider=System.Data.SqlClient;
      provider connection string="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\FFBall.mdf;Integrated Security=True;User Instance=True;MultipleActiveResultSets=True"

    And somewhere I remember changing the identity from Network Service to LocalSystem, because when I first started I was getting this same message, but I changed this value and it started working. I saw that suggested somewhere too, but I do not recall where. Wait, I remember now: I believe in IIS7, under Application Pools, the DefaultAppPool identity is set to LocalSystem.

    Additional things I've tried:
    - Restart the computer
    - Recycle the application pool
    - Antivirus isn't running

    Any help would be appreciated. Thank you in advance.


  • web design PSD to html -> more direct ways?

    - by Assembler
    At work I see one colleague designing a site in Photoshop/Fireworks, and I see another taking this data, slicing it up and using Dreamweaver to rebuild the same thing from scratch. It seems like too much mucking around! I know that Photoshop can output tables-based HTML, and Fireworks will create divs with absolute positioning; neither appears to be very helpful. Admittedly, I haven't tried much of (DW/FW) (CS4/CS3) since becoming a programmer, so I don't know if new versions are addressing this workflow issue, but are we still double-handling things?

    Can we attach some sort of layout metadata (this is a rollover button, this will be a SWF, this will be text, this logo will hide "xyz" <h1> text etc) to slices to aid in layout generation? Are there some secret tools which assist in this conversion process? Or are we still restricted to doing things by hand?

    The frustration continues when said hand-built page needs to be reworked again to fit Smarty Templates/Wordpress/generic CMS. I acknowledge that designers need to be free of systems to be able to do whatever, but most conventional sites have:
    - a header with navigation
    - a sidebar with more links
    - the main content part
    - maybe another sidebar
    - a footer

    Given the similarity of a lot of components, shouldn't there be a more systematic approach to going from sliced designs to functional HTML? Or am I over-simplifying things?

    -edit-

    Mmmmm.... I suppose I will accept an answer, but they weren't really what I was looking for. It just seems like designing the DOM is a bit of a holy grail ("It's only a model!"), and maybe with all the "groovy" things you can do with HTML and Javascript it would be mighty hard work, but with a set of constraints (that 960 stuff looks interesting), some well designed reset style sheets and a bit of... fairy dust? we should be able to improve the workflow. Photoshop's tables by themselves are pretty much useless, I agree, but surely we can take this data, and then select a group of cells and say "right, this is a text div, overflow:auto" or "these cells are an image block, style it with the same height/width as the selected area". Admittedly here at work there are other elephants in the room that need to make their formal introductions to management, but some parts of the design-to-page workflow seem... uneducated at best.


  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format

    Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system

      |----------------+-------------------------------|
      | OS             | Windows 2008 64-bit           |
      | MySQL version  | 5.5.24 (x86_64)               |
      | CPU            | 2x Xeon E5420 (8 cores total) |
      | RAM            | 8GB                           |
      | SSD filesystem | 500 GiB                       |
      | HDD RAID       | 12 TiB                        |
      |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics

      |------------------+--------------|
      | number of files  | ~16,000      |
      | total size       | 1.3 TiB      |
      | min size         | 0 bytes      |
      | max size         | 12 GiB       |
      | mean             | 800 MiB      |
      | median           | 500 MiB      |
      | total datapoints | ~200 billion |
      |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema

    I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 Billion datapoint question

    I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info

    The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns.

    My naïve plan for a database schema is:

    runs table

      | column name | type        |
      |-------------+-------------|
      | id          | PRIMARY KEY |
      | start_time  | TIMESTAMP   |
      | name        | VARCHAR     |
      |-------------+-------------|

    spectra table

      | column name    | type        |
      |----------------+-------------|
      | id             | PRIMARY KEY |
      | name           | VARCHAR     |
      | index          | INT         |
      | spectrum_type  | INT         |
      | representation | INT         |
      | run_id         | FOREIGN KEY |
      |----------------+-------------|

    datapoints table

      | column name | type        |
      |-------------+-------------|
      | id          | PRIMARY KEY |
      | spectrum_id | FOREIGN KEY |
      | mz          | DOUBLE      |
      | num_counts  | DOUBLE      |
      | index       | INT         |
      |-------------+-------------|

    Is this reasonable?

