Search Results

Search found 7222 results on 289 pages for 'storage cells'.


  • In this context with views in a tree, which class should perform the task?

    - by Jhonny 8
    Imagine this context: a main view contains a table, which contains some cells, each with its own controller and view files. In the main view, I have a "Person" object with 3 different IDs. Depending on certain conditions (say, the time of day), I have to choose one of them and display it in the cell. My question is: should the main view pass the whole object to the table, and the table pass it on to the cell, so that the cell calculates which ID to show? Or should the main view calculate this parameter and send the result to the table, which passes it to the cell? This is a question about OO design: which of these approaches is more suitable, and why?

    Read the article

  • links for 2011-02-15

    - by Bob Rhubart
    Why the hybrid cloud model is the best approach | Cloud Computing - InfoWorld
    Although some cloud providers look at the hybrid model as blasphemy, there are strong reasons for them to adopt it, says David Linthicum. (tags: davidlinthicum cloud)

    Exadata Part V: Monitoring with Database Control | The Oracle Instructor
    Uwe Hesse shows how "we can use Oracle Enterprise Manager Database Control to monitor an Exadata Database Machine, especially the Storage Servers (Cells)." (tags: oracle exadata)

    ATG Live Webcast Feb. 24th: Using the EBS 12 SOA Adapter (Oracle E-Business Suite Technology)
    "This live one-hour webcast will offer a review of the Service Oriented Architecture (SOA) capabilities within E-Business Suite R12 focusing on the E-Business Suite Adapter." (tags: oracle soa)

    Oracle Forms Migration to ADF - Webinar vom ORACLE Partner PITSS (Oracle Fusion Middleware für den Finanzsektor)
    "Join Oracle's Grant Ronald and PITSS to see a software architecture comparison of Oracle Forms and ADF and a live step-by-step presentation on how to achieve a successful migration." (tags: oracle adf)

    Read the article

  • How to ramp up my data structures skills after a long hibernation

    - by Anon
    I was pretty good with algorithms and data structures once, a long, long time ago. Since then, I programmed professionally, and then went on to manage a small team, which set my tech skills in this field way back. I've decided I want to be a developer again, and work for Google. The thing is, I'm so out of practice that if I were interviewed right now I would surely flunk out in 10 minutes. What training program would you recommend to get me back into shape? I already started this weekend by going back to the absolute basics and implementing a few sort algorithms, a linked list, and a hash table. Next, I think I'll read through the entire course material on the other basic data structures and graph algorithms. I want to find a focused set of practical exercises I can do in a relatively short amount of time, to jog the old brain cells. I know this stuff - I just need to remind myself that I know it.

    Read the article

  • How can I turn off calculated columns in an Excel table from a macro using VBA? [migrated]

    - by user41293
    I am working on a macro that inserts formulas into a cell in an Excel table. The Excel table does its automatic filling of columns and fills every cell in that column with the formula, but all I want is for one cell to have the formula. I cannot simply turn off automatic formulas for tables, since other people need to use this worksheet on their systems. Is there a way to turn off the automatic filling of formulas in a table using VBA in a macro? It only needs to be temporary: I just want to turn it off, put in my formulas, then turn it back on.
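    One approach, as a sketch: the auto-fill behaviour is governed by Excel's AutoCorrect options, so the macro can toggle Application.AutoCorrect.AutoFillFormulasInLists around the write. The target range and formula below are placeholders.

    Sub InsertFormulaWithoutCalculatedColumn()
        ' Remember the user's current setting so it can be restored afterwards
        Dim previous As Boolean
        previous = Application.AutoCorrect.AutoFillFormulasInLists

        ' Temporarily stop Excel from extending formulas down table columns
        Application.AutoCorrect.AutoFillFormulasInLists = False

        ' Hypothetical target: a single cell inside the table
        ActiveSheet.Range("C2").Formula = "=A2+B2"

        ' Restore the original behaviour
        Application.AutoCorrect.AutoFillFormulasInLists = previous
    End Sub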

    Read the article

  • Moving StarterSTS to the (Azure) Cloud

    - by Your DisplayName here!
    Quite a few people have asked me about an Azure version of StarterSTS. While I kinda knew what I had to do to make the move, I couldn't find the time. Until recently. This blog post briefly documents the necessary changes and design decisions for the next version of StarterSTS, which will work both on-premise and on Azure.

    Provider
    Fortunately StarterSTS is already based on the idea of "providers". Authentication, roles and claims generation are based on the standard ASP.NET provider infrastructure. This makes the migration to different data stores less painful. In my case I simply moved the ASP.NET provider database to SQL Azure and still use the standard SQL Server based membership, roles and profile providers. In addition, StarterSTS has its own providers to abstract resource access for certificates, relying party registration, client certificate registration and delegation, so I only had to provide new implementations. Signing and SSL keys now go in the Azure certificate store, and user mappings (client certificates and delegation settings) have been moved to Azure table storage. The one thing I didn't anticipate when I originally wrote StarterSTS was the need to also encapsulate configuration. Currently configuration is "locked" to the standard .NET configuration system. The new version will have a pluggable SettingsProvider, with versions for .NET configuration as well as Azure service configuration. If you want to externalize these settings into e.g. a database, it is now just a matter of supplying a corresponding provider. Moving between the on-premise and Azure versions will be just a matter of using different providers.

    URL Handling
    Another thing that's substantially different on Azure (and in load-balanced scenarios in general) is the handling of URLs. In farm scenarios, the standard APIs like ASP.NET's Request.Url return the current (internal) machine name, but you typically need the address of the external-facing load balancer. There's a hotfix for WCF 3.5 (included in v4) that fixes this for WCF metadata. This was accomplished by using the HTTP Host header to generate URLs instead of the local machine name. I now use the same approach for generating WS-Federation metadata as well as information card files.

    New Features
    I introduced a cache provider. Since we now have slightly more expensive lookups (e.g. relying party data from table storage), it makes sense to cache certain data in the front end. The default implementation uses the ASP.NET web cache and can easily be extended to use products like memcached or AppFabric Caching. Starting with the relying party provider, I now also provide a read/write interface, which allows building management interfaces on top of this provider. I also include a (very) simple web page for working with the relying party provider data. I guess I will use the same approach for other providers in the future as well. I am also doing some work on the tracing and health monitoring area, which is especially important for the Azure version. Stay tuned.
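    To illustrate the pluggable-configuration idea described above, a hypothetical provider shape might look like the following. This is a sketch only; the post does not show StarterSTS's actual interface, so every name here is an assumption.

    // Hypothetical settings abstraction -- names are illustrative, not StarterSTS's.
    public interface ISettingsProvider
    {
        string GetSetting(string key);
    }

    // .NET configuration-backed implementation (on-premise).
    public class ConfigFileSettingsProvider : ISettingsProvider
    {
        public string GetSetting(string key)
        {
            return System.Configuration.ConfigurationManager.AppSettings[key];
        }
    }

    // Azure service configuration-backed implementation (cloud).
    public class AzureSettingsProvider : ISettingsProvider
    {
        public string GetSetting(string key)
        {
            // RoleEnvironment lives in Microsoft.WindowsAzure.ServiceRuntime
            return Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment
                .GetConfigurationSettingValue(key);
        }
    }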

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well.

    Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired network virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4].

    And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement.

    Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic.

    So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services.

    [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware.
    [2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo
    [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421
    [4] Oracle Solaris 11 Networking Virtualization Technology; http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html
    [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html
    [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0

    Read the article

  • I need some help creating a non-binary tree (or some other data structure that will better solve my problem)

    - by EDO
    I have about ten lists of numbers and some strings. Each list has about <= 30K lines, and each line in a list has a distinct number. I need to build an efficient way of finding all the lines across the lists that have the same 'control' number (or key, for the DB folks) and comparing what is in their string parts. I am writing this in Java. I have thought about using trees, but my brain cells are about burnt out now. I need some help.
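    For the record, a tree is not required here: a hash map keyed on the control number gives linear-time grouping. A minimal sketch follows; the Line record and the loading of the ten lists are hypothetical placeholders.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ControlNumberIndex {
        // Hypothetical record: one parsed line (control number + string part)
        static class Line {
            final long key;
            final String text;
            Line(long key, String text) { this.key = key; this.text = text; }
        }

        public static void main(String[] args) {
            List<List<Line>> lists = new ArrayList<>(); // fill from the ten files

            // Group every line from every list by its control number: O(total lines)
            Map<Long, List<Line>> byKey = new HashMap<>();
            for (List<Line> list : lists) {
                for (Line line : list) {
                    List<Line> bucket = byKey.get(line.key);
                    if (bucket == null) {
                        bucket = new ArrayList<>();
                        byKey.put(line.key, bucket);
                    }
                    bucket.add(line);
                }
            }

            // Keys whose bucket holds more than one line occur in several lists;
            // compare the string parts of those lines here.
            for (Map.Entry<Long, List<Line>> entry : byKey.entrySet()) {
                if (entry.getValue().size() > 1) {
                    System.out.println(entry.getKey() + " appears " + entry.getValue().size() + " times");
                }
            }
        }
    }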

    Read the article

  • Comparisons of Javascript 'data grids'?

    - by Joe
    I've found plenty of questions between here and StackExchange of people asking for the 'best' data grid / data table, or one that has a particular feature, and plenty of lists out there (of various ages) listing the various data grid implementations... but is anyone aware of a matrix of which features the various solutions implement? (e.g., allow shift-click to select multiple rows; support checkboxes for selection; can update a regular table in place; allow editing of cells; support WebSQL or IndexedDB for local caching; which browsers they support; infinite scroll; etc.) There's a generic 'JavaScript framework' comparison on Wikipedia, which would be the sort of thing I'm looking for, but it doesn't go into detail on data grids. (Which makes sense, as so many are extensions, not core features of those frameworks, and in the case of jQuery, there are lots of 'em.)

    Read the article

  • Linq: The specified type 'string' is not a valid provider type

    - by Joe Pitz
    Using Linq to call a stored procedure that takes a single string. The stored procedure returns a data set row that contains a string and an int. Code:

    PESQLDataContext pe = new PESQLDataContext(strConnStr);
    pe.ObjectTrackingEnabled = false;
    gvUnitsPassed.DataSource = pe.PassedInspection(Line);
    gvUnitsPassed.DataBind();
    pe.Dispose();

    When the code runs, an exception is thrown at the IExecuteResult result = ... statement. Enclosed is my result class in the designer.cs file:

    [Function(Name = "dbo.PassedInspection")]
    public ISingleResult<PassedInspectionResult> PassedInspection([Parameter(Name = "Model", DbType = "VarChar(4)")] string model)
    {
        IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), model);
        return ((ISingleResult<PassedInspectionResult>)(result.ReturnValue));
    }

    public partial class PassedInspectionResult
    {
        private string _Date;
        private int _Passed;

        public PassedInspectionResult() { }

        [Column(Storage = "_Date", DbType = "string NULL")]
        public string Date
        {
            get { return this._Date; }
            set { if ((this._Date != value)) { this._Date = value; } }
        }

        [Column(Storage = "_Passed", DbType = "Int NULL")]
        public int Passed
        {
            get { return this._Passed; }
            set { if ((this._Passed != value)) { this._Passed = value; } }
        }
    }

    I have other stored procedures with similar arguments that run just fine. Thanks
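    The likely culprit is visible in the mapping above: DbType = "string NULL" names a CLR type, and LINQ to SQL expects a SQL Server provider type there. A hedged correction follows; the column's actual SQL type is an assumption, so adjust it to match what dbo.PassedInspection really returns.

    // "string" is not a SQL Server type; name the real column type instead.
    // VarChar(50) is a placeholder -- use the type the proc's result set has.
    [Column(Storage = "_Date", DbType = "VarChar(50) NULL")]
    public string Date
    {
        get { return this._Date; }
        set { this._Date = value; }
    }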

    Read the article

  • What is wrong with this solution? (Perm-Missing-Elem codility test)

    - by user2956907
    I have started playing with Codility and came across this problem: A zero-indexed array A consisting of N different integers is given. The array contains integers in the range [1..(N + 1)], which means that exactly one element is missing. Your goal is to find that missing element. Write a function:

    int solution(int A[], int N);

    that, given a zero-indexed array A, returns the value of the missing element. For example, given array A such that:

    A[0] = 2
    A[1] = 3
    A[2] = 1
    A[3] = 5

    the function should return 4, as it is the missing element. Assume that: N is an integer within the range [0..100,000]; the elements of A are all distinct; each element of array A is an integer within the range [1..(N + 1)]. Complexity: expected worst-case time complexity is O(N); expected worst-case space complexity is O(1) beyond input storage (not counting the storage required for input arguments). I have submitted the following solution (in PHP):

    function solution($A) {
        $nr = count($A);
        $totalSum = (($nr+1)*($nr+2))/2;
        $arrSum = array_sum($A);
        return ($totalSum-$arrSum);
    }

    which gave me a score of 66 out of 100, because it was failing the test involving large arrays: "large_range range sequence, length = ~100,000" with the result:

    RUNTIME ERROR
    tested program terminated unexpectedly
    stdout: Invalid result type, int expected.

    I tested locally with an array of 100,000 elements, and it worked without any problems. So, what seems to be the problem with my code, and what kind of test cases did Codility use to return "Invalid result type, int expected"?
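    A plausible explanation, inferred from the error text rather than verified against Codility's grader: with N near 100,000 the product ($nr+1)*($nr+2) is about 10^10, which exceeds a 32-bit PHP integer, so PHP silently promotes it to float and the checker then receives a float instead of an int. A sketch that keeps every intermediate value small and casts the result:

    <?php
    function solution($A) {
        $n = count($A);
        $missing = $n + 1;                 // start with the one value beyond the array
        for ($i = 0; $i < $n; $i++) {
            // add each expected value, subtract each seen value; the running
            // total stays near zero, so no float promotion can occur
            $missing += ($i + 1) - $A[$i];
        }
        return (int)$missing;              // guarantee an int result type
    }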

    Read the article

  • What is wrong with Paperclip+ImageMagick on Heroku?

    - by Yuri
    UPD

    class User < ActiveRecord::Base
      Paperclip.options[:swallow_stderr] = false

      has_attached_file :photo,
        :styles => { :square => "100%", :large => "100%" },
        :convert_options => { :square => "-auto-orient -geometry 70X70#",
                              :large => "-auto-orient -geometry X300" },
        :storage => :s3,
        :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
        :path => ":attachment/:id/:style.:extension",
        :bucket => 'mybucket'

      validates_attachment_size :photo, :less_than => 5.megabyte
    end

    Works great on my local machine, but gives me an error on Heroku:

    There was an error processing the thumbnail for stream.20143

    The thing is, I want to auto-orient photos before resizing, so that they are resized properly. The only working variant now (thanks to jonnii) is resizing without auto-orient:

    ...
    has_attached_file :photo,
      :styles => { :square => "70X70#", :large => "X300" },
      :storage => :s3,
      :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
      :path => ":attachment/:id/:style.:extension",
      :bucket => 'mybucket'
    ...

    How can I pass additional convert options to Paperclip on Heroku?
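    One thing worth ruling out, as an assumption rather than a confirmed diagnosis for this app: on Heroku's stack of that era, ImageMagick's binaries live in /usr/bin, and Paperclip fails to run convert when it cannot find them, which surfaces as exactly this kind of thumbnail-processing error. The commonly suggested check is:

    # config/initializers/paperclip.rb (file location is an assumption)
    # Point Paperclip at ImageMagick's location on Heroku; with
    # swallow_stderr disabled above, the real convert error will also
    # show up in the logs if this is not the cause.
    Paperclip.options[:command_path] = "/usr/bin"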

    Read the article

  • Problem with mapping a Linq-to-Sql column to a different type

    - by csharpnoob
    Hi, maybe someone can help. I want to expose a different datatype on a mapped Linq class. This works:

    private System.Nullable<short> _deleted = 1;

    [Column(Storage = "_deleted", Name = "deleted", DbType = "SmallInt", CanBeNull = true)]
    public System.Nullable<short> deleted
    {
        get { return this._deleted; }
        set { this._deleted = value; }
    }

    Sure thing. But not when I want to add some logic for a boolean, like this:

    private System.Nullable<short> _deleted = 1;

    [Column(Storage = "_deleted", Name = "deleted", DbType = "SmallInt", CanBeNull = true)]
    public bool deleted
    {
        get
        {
            if (this._deleted == 1) { return true; }
            return false;
        }
        set
        {
            if (value == true) { this._deleted = (short)1; }
            else { this._deleted = (short)0; }
        }
    }

    I always get a runtime error:

    [TypeLoadException: GenericArguments[2], "System.Nullable`1[System.Int16]", on 'System.Data.Linq.Mapping.PropertyAccessor+Accessor`3[T,V,V2]' violates the constraint of type parameter 'V2'.]

    I can't change the database column to bit. I need to have the cast in the mapping class.
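    A common workaround, sketched here under the assumption that LINQ to SQL may map a private member (it can, via the Column attribute on non-public properties): keep the mapped property typed as the column really is, and expose the bool as a separate, unmapped property.

    private System.Nullable<short> _deleted = 1;

    // The mapped property keeps the column's real type, so the Storage field
    // and the property type agree and no TypeLoadException occurs.
    [Column(Storage = "_deleted", Name = "deleted", DbType = "SmallInt", CanBeNull = true)]
    private System.Nullable<short> deletedValue
    {
        get { return this._deleted; }
        set { this._deleted = value; }
    }

    // Unmapped convenience property: LINQ to SQL never inspects it, so the
    // bool conversion can live here (but it cannot be used inside queries).
    public bool deleted
    {
        get { return this._deleted == 1; }
        set { this._deleted = value ? (short)1 : (short)0; }
    }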

    Read the article

  • Extracting files from merge module

    - by Mystagogue
    All I want is a command-line tool that can extract files from a merge module (.msm) onto disk. I'm trying msidb.exe and orca.exe. The documentation for Orca states:

    Many merge module options can be specified from the command line... Extracting Files from a Merge Module: Orca supports three different methods for extracting files contained in a merge module. Orca can extract the individual CAB file, extract the files into a module tree and extract the files into a source image once it has been merged into a target database... Extracting Files: To extract the individual files from a merge module, use the -x <path> option on the command line, where <path> is the desired path to the new directory tree. The specified path is used as the root path for the extracted files. All files are extracted from the CAB file embedded in the module and placed in the specified path. The directory layout for the extracted files is based on the directory tree of the merge module.

    It mostly sounds like exactly what I need. But when I try it, Orca simply opens up an editor (with info on the .msm I specified) and then does nothing. I've tried a variety of command lines, usually starting with this:

    orca -x theDirectory theModule.msm

    I use "theDirectory" as whatever empty folder I want. Like I said, it didn't do anything. Then I tried msidb, where a couple of attempts I've made look like this:

    msidb -d theModule.msm -w {storage}
    msidb -d theModule.msm -x {stream}

    In both cases, I don't know what to insert for {storage} or {stream} to make it happy - I don't know what those represent. Can someone explain what I'm doing wrong with the command line options? Is there any other tool that can do this?
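    For the msidb route, a hedged sketch: in a standard merge module the embedded cabinet is stored as a stream named MergeModule.CAB, so that name is what the -x option wants; the exported cabinet can then be unpacked with expand. The directory must already exist.

    rem Export the embedded cabinet stream from the merge module
    msidb.exe -d theModule.msm -x MergeModule.CAB

    rem Unpack every file from the cabinet into theDirectory
    expand.exe MergeModule.CAB -F:* theDirectory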

    Read the article

  • SQL Server slow in production environment

    - by Lieven Cardoen
    I have a weird problem in a customer's production environment. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server. The data, log and filestream files are on another storage server (data and filestream together, the log on a separate server). In our local test environment, there's one particular query that executes with these durations (after first clearing the cache):

    300ms (the first run takes longer, but from then on it's cached)
    20ms
    15ms
    17ms

    In the customer's production environment, the SQL Server is more powerful, yet these are the durations (I didn't have the rights to clear the cache; I will try this tomorrow):

    2500ms
    2600ms
    2400ms

    The servers in the customer's production environment are more powerful, but they are virtual servers (ours are not). What could be the cause... Not enough memory? Fragmentation? Physical storage? How would you tackle this performance problem? EDIT: Some people have asked me whether the data set is equal, and it is: I restored their database in our environment. It's true that this was the first thing I looked at. (@Everyone: I added the edit because it will be the first thing that many will think of.)

    Read the article

  • How do you use asynchronous ORMs without huge callback chains?

    - by hornairs
    I'm using the relatively immature Joose JavaScript ORM plugin (project page) to persist objects in an Appcelerator Titanium (company page) mobile project. Since it's client-side storage, the application has to check whether the database is initialized before starting up the ORM, since the ORM inspects the DB tables to construct the classes. My problem is that this sequence of operations (and, if this one is anything to go by, other things down the road) takes a lot of callbacks to complete. I have a lot of jumping around in the code that isn't apparent to a maintainer, and it results in some complex call graphs. So, I ask these questions: How would you asynchronously initialize a database and populate it with seed data using an ORM that needs the schema to be correct to function? Do you have any general strategies or links for async/event-driven programming that keep the call graph simple and understandable? Do you have any suggestions for JavaScript ORMs/meta-object systems that work with HTML 5 as a storage engine and are hopefully framework-agnostic? Am I just a big newb who should be able to work this out with ease? Thanks folks!
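    On the second question, one era-appropriate pattern is to name each initialization step and run the steps through a tiny sequencer, so the call graph reads top to bottom instead of nesting. A sketch; all function names here are hypothetical.

    // Each step does its async work, then invokes the provided continuation.
    function openDatabase(next)  { /* open HTML5 storage ... */ next(); }
    function ensureSchema(next)  { /* create tables if missing ... */ next(); }
    function seedData(next)      { /* insert seed rows if empty ... */ next(); }
    function startOrm(next)      { /* let the ORM inspect the tables */ next(); }

    // Runs the steps strictly in order; 'done' fires after the last one.
    function runSequence(steps, done) {
        var i = 0;
        (function next() {
            if (i < steps.length) {
                steps[i++](next);
            } else {
                done();
            }
        })();
    }

    runSequence([openDatabase, ensureSchema, seedData, startOrm], function () {
        // the application is ready to use the ORM here
    });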

    Read the article

  • jQuery .not() and classes

    - by Giles B
    Hi guys, I have 2 anchor links (a.selector). When one is clicked, a class of 'active-arrow' is applied to it, and the click also removes that class from the other anchor and lowers its opacity to 0.2. I then want a fade effect when the user hovers over the anchor that doesn't have 'active-arrow' applied, so that it goes to full opacity on mouseenter and back to 0.2 on mouseleave. The problem I'm having is that both .not and :not don't seem to be working as expected: the hover effect works, but if I click on the anchor while hovering, the 'active-arrow' class is applied, yet on mouseleave the opacity is still faded down to 0.2 even though 'active-arrow' is applied. Also, the hover then doesn't work for the other link, which has had 'active-arrow' removed. It's a bit of a hard one to explain, so here's some code that hopefully helps a bit.

    // If a.selector doesn't have the class 'active-arrow', run the hoverFade function
    $("a.selector").not(".active-arrow").hoverFade();

    // Functions for the first element
    $('a.selector-1').click(function () {
        $('a.selector-2').removeClass('active-arrow'); // Remove background image from corresponding element
        $('ul#storage-items-2').fadeOut(1200).addClass('hide'); // Fade out then hide corresponding list
        $(this).addClass('active-arrow', 'fast'); // Add background image to current element
        $('ul#storage-items-1').removeClass('hide').fadeIn(1800); // Unhide and fade in the list
        $('a.selector-2').fadeTo(500, 0.2); // Fade corresponding element
        $(this).fadeTo(800, 1); // Fade this element to full opacity
    });

    I only included the code for the first anchor (a.selector-1), as the code for the second anchor is identical with the class names changed to a.selector-2. The hoverFade function is in a separate file so we can re-use it:

    jQuery.fn.hoverFade = function() {
        return this.each(function() {
            $(this).hover(
                function () { $(this).fadeTo(500, 0.8); },
                function () { $(this).fadeTo(500, 0.2); });
        });
    }

    Each anchor link fades in and fades out a UL as well. Any help is most appreciated. Thanks, Giles
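    The likely cause, with a sketch of a fix (untested against this exact page): .not(".active-arrow") filters the matched elements once, at bind time, so class changes made later are never seen. Testing the class inside the handlers makes the check live:

    jQuery.fn.hoverFade = function () {
        return this.each(function () {
            $(this).hover(
                function () {
                    // check the class at event time, not at bind time
                    if (!$(this).hasClass('active-arrow')) $(this).fadeTo(500, 0.8);
                },
                function () {
                    if (!$(this).hasClass('active-arrow')) $(this).fadeTo(500, 0.2);
                });
        });
    };

    // Bind to both anchors unconditionally; the live class check does the filtering.
    $("a.selector").hoverFade();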

    Read the article

  • Migrating data from Plone to Liferay, or how can I retrieve information from Plone's Data.fs

    - by brandizzi
    Hello, all. I need to migrate data from a Plone-based portal to Liferay. Has anyone some idea on how to do it? In any case, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do that, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything out of it. I would like to get the PloneSite instance and do something like this:

    >>> import ZODB
    >>> from ZODB import FileStorage, DB
    >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs"
    >>> storage = FileStorage.FileStorage(path)
    >>> db = DB(storage)
    >>> conn = db.open()
    >>> root = conn.root()
    >>> app = root['Application']
    >>> plone_site = app.getChildNodes()[13] # 13 would be the index of the PloneSite object
    >>> a = plone_site.get_articles()
    >>> for article in a:
    ...     print "Title:", article.title
    ...     print "Content:", article.content
    Title: <some title>
    Content: <some content>
    Title: <some title>
    Content: <some content>

    Of course, it does not need to be so straightforward. I just want some information about the structure of PloneSite and how to recover its data. Has anyone some idea? Thank you in advance!
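    A hedged sketch of how the traversal usually looks: Plone's containers expose the Zope OFS API rather than DOM-style child nodes, so objectItems() is the natural way to walk the site. The site id 'plone' and the portal_type attribute are assumptions that vary per installation.

    from ZODB import FileStorage, DB

    path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs"
    storage = FileStorage.FileStorage(path)
    root = DB(storage).open().root()
    app = root['Application']

    site = getattr(app, 'plone')            # the PloneSite id is site-specific
    for obj_id, obj in site.objectItems():  # OFS traversal, not getChildNodes()
        # CMF content carries portal_type; plain Zope objects only meta_type
        print obj_id, getattr(obj, 'portal_type', obj.meta_type)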

    Read the article

  • C#, ASP.NET: Uploading files to a file server...

    - by Imcl
    Using the link below, I wrote some code for my application. I am not able to get it right, though. Please refer to the link and help me out with it: http://stackoverflow.com/questions/263518/c-uploading-files-to-file-server The following is my code:

    protected void Button1_Click(object sender, EventArgs e)
    {
        filePath = FileUpload1.FileName;
        try
        {
            WebClient client = new WebClient();
            NetworkCredential nc = new NetworkCredential(uName, password);
            Uri addy = new Uri("\\\\192.168.1.3\\upload\\");
            client.Credentials = nc;
            byte[] arrReturn = client.UploadFile(addy, filePath);
            arrReturn = client.UploadFile(addy, filePath);
            Console.WriteLine(arrReturn.ToString());
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }

    I also used:

    File.Copy(filePath, "\\192.168.1.3\upload\");

    The following line doesn't execute:

    byte[] arrReturn = client.UploadFile(addy, filePath);

    I tried changing it to:

    byte[] arrReturn = client.UploadFile("\\192.168.1.3\upload\", filePath);

    It still doesn't work. Any solution to it? I basically want to transfer a file from the client to the file storage server without actually logging into the server, so that the client cannot access the storage location on the server directly.
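    A sketch of a direction that usually works for a UNC share, under two assumptions: the share path is as posted, and the identity the page runs under has write access (via impersonation or the app pool account). One likely reason the calls above fail is that the target names a directory, not a destination file, and the plain strings are not escaped correctly.

    using System.IO;

    protected void Button1_Click(object sender, EventArgs e)
    {
        // Client-supplied name: keep only the file name portion for safety
        string fileName = Path.GetFileName(FileUpload1.FileName);

        // A verbatim string avoids the escaping bugs in "\\192.168.1.3\upload\"
        string destination = Path.Combine(@"\\192.168.1.3\upload", fileName);

        // Write the uploaded content straight to the share
        FileUpload1.SaveAs(destination);
    }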

    Read the article

  • Replacing a Namespace with XSLT

    - by er4z0r
    Hi, I want to work around a 'bug' in certain RSS feeds, which use an incorrect namespace for the MediaRSS module. I tried to do it by manipulating the DOM programmatically, but using XSLT seems more flexible to me. Example:

    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss" url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" />
    <media:thumbnail url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" />

    where the namespace must be http://search.yahoo.com/mrss/ (mind the slash). This is my stylesheet:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="//*[namespace-uri()='http://search.yahoo.com/mrss']">
            <xsl:element name="{local-name()}" namespace="http://search.yahoo.com/mrss/">
                <xsl:apply-templates select="@*|*|text()" />
            </xsl:element>
        </xsl:template>
    </xsl:stylesheet>

    Unfortunately the result of the transformation is invalid XML, and my RSS parser (the ROME library) does not parse the feed anymore:

    java.lang.IllegalStateException: Root element not set
        at org.jdom.Document.getRootElement(Document.java:218)
        at com.sun.syndication.io.impl.RSS090Parser.isMyType(RSS090Parser.java:58)
        at com.sun.syndication.io.impl.FeedParsers.getParserFor(FeedParsers.java:72)
        at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:273)
        at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:251)
        ... 8 more

    What is wrong with my stylesheet?
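    A likely explanation, with a sketch of the fix: the stylesheet has a template only for elements in the wrong namespace, so nothing else (including the feed's root element) is ever copied to the output. Adding an identity template, and copying attributes instead of only matching them, should produce a complete document:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

        <!-- identity: copy everything no other template claims -->
        <xsl:template match="@*|node()">
            <xsl:copy>
                <xsl:apply-templates select="@*|node()" />
            </xsl:copy>
        </xsl:template>

        <!-- rebuild only the mis-namespaced elements with the trailing slash -->
        <xsl:template match="*[namespace-uri()='http://search.yahoo.com/mrss']">
            <xsl:element name="media:{local-name()}" namespace="http://search.yahoo.com/mrss/">
                <xsl:apply-templates select="@*|node()" />
            </xsl:element>
        </xsl:template>
    </xsl:stylesheet>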

    Read the article

  • Specifying ASP.NET MVC attributes for auto-generated data models

    - by Lyubomyr Shaydariv
    Hello, everyone. I'm very new to ASP.NET MVC (as well as ASP.NET in general) and aiming to gain some knowledge of this technology, so I'm sorry if I ask some trivial questions. I have installed ASP.NET MVC 3 RC1 and I'm trying to do the following. Suppose I have a model that's completely auto-generated from a table using the "LINQ to SQL Classes" template in VS2010. The template generates 3 files (two .cs files and one .layout file, respectively), and the generated partial class is expected to be used as an MVC model. A single DB column mapped into the model may look like this:

    [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_Name", DbType = "VarChar(128)")]
    public string Name
    {
        get { return this._Name; }
        set
        {
            if ((this._Name != value))
            {
                // ... generated stuff goes here
            }
        }
    }

    The ASP.NET MVC engine also provides a beautiful declarative way to specify some additional stuff, like the RequiredAttribute, DisplayNameAttribute and other nice attributes. But since the mapped model is purely auto-generated, I've realized that I should not change the model manually and specify the fields like this:

    [Required]
    [DisplayName("Project name")]
    [StringLength(128)]
    [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_Name", DbType = "VarChar(128)")]
    public string Name { ...

    though this approach works perfectly... until I change the model in the DBML designer, which removes the ASP.NET MVC attributes automatically. So, how do I specify ASP.NET MVC attributes for DBML models and their fields safely? Thanks in advance, and Merry Christmas.
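    The standard answer to this is the "buddy class" pattern: the generated class is partial, so it can be extended in a separate file that survives regeneration, and MetadataTypeAttribute hangs the validation attributes on a companion type. A sketch; the class and property names assume the generated model.

    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;

    // Lives in your own file, not the generated one, so the DBML designer
    // never touches it when the model is regenerated.
    [MetadataType(typeof(ProjectMetadata))]
    public partial class Project
    {
    }

    // The "buddy" class: property names must match the generated ones.
    public class ProjectMetadata
    {
        [Required]
        [DisplayName("Project name")]
        [StringLength(128)]
        public string Name { get; set; }
    }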

    Read the article

  • InsertOnSubmit - NullReferenceException

    - by Jackie Chou
    I have 2 models. AccountEntity:

    [Table(Name = "Account")]
    public class AccountEntity
    {
        [Column(IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
        public int id { get; set; }

        [Column(CanBeNull = false, Name = "email")]
        public string email { get; set; }

        [Column(CanBeNull = false, Name = "pwd")]
        public string pwd { get; set; }

        [Column(CanBeNull = false, Name = "row_guid")]
        public Guid guid { get; set; }

        private EntitySet<DetailsEntity> details_id { get; set; }

        [Association(Storage = "details_id", OtherKey = "id", ThisKey = "id")]
        public ICollection<DetailsEntity> detailsCollection { get; set; }
    }

    DetailsEntity:

    [Table(Name = "Details")]
    public class DetailsEntity
    {
        public DetailsEntity(AccountEntity a)
        {
            this.Account = a;
        }

        [Column(IsPrimaryKey = true, IsDbGenerated = true, DbType = "int")]
        public int id { get; set; }

        private EntityRef<AccountEntity> _account = new EntityRef<AccountEntity>();

        [Association(IsForeignKey = true, Storage = "_account", ThisKey = "id")]
        public AccountEntity Account { get; set; }
    }

    Main:

    using (Database db = new Database())
    {
        AccountEntity a = new AccountEntity();
        a.email = "hahaha";
        a.pwd = "13212312";
        a.guid = Guid.NewGuid();
        db.Account.InsertOnSubmit(a);
        db.SubmitChanges();
    }

    The models have a relationship AccountEntity <- DetailsEntity (1-n). When I try to insert a record, a NullReferenceException is thrown, caused by the EntitySet being null. Please help me make the insert work.
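    A sketch of the most likely fix (the foreign-key name below is an assumption based on the 1-n description): the EntitySet named in Storage is never constructed, so LINQ to SQL hits null while wiring up the association during the insert. Initialize it, and point the keys at a real FK column instead of both primary keys.

    [Table(Name = "Account")]
    public class AccountEntity
    {
        // The Storage member should be an initialized field; otherwise change
        // tracking dereferences null during InsertOnSubmit.
        private EntitySet<DetailsEntity> _details = new EntitySet<DetailsEntity>();

        // ... column properties as before ...

        // ThisKey is Account's PK; OtherKey should be the FK column on Details
        // (named "account_id" here purely as a placeholder).
        [Association(Storage = "_details", ThisKey = "id", OtherKey = "account_id")]
        public EntitySet<DetailsEntity> detailsCollection
        {
            get { return this._details; }
            set { this._details.Assign(value); }
        }
    }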

    Read the article

  • including tk.h and tcl.h in a C program

    - by user362075
    Hi, I am working on an Ubuntu system. My aim is basically to make an IDE in the C language using GUI tools from Tcl/Tk. I installed tcl8.4, tk8.4, tcl8.4-dev and tk8.4-dev, and have the tk.h and tcl.h headers on my system. But when I run a basic hello-world program, it shows a lot of errors.

    #include "tk.h"
    #include "stdio.h"

    void hello()
    {
        puts("Hello C++/Tk!");
    }

    int main(int, char *argv[])
    {
        init(argv[0]);
        button(".b") -text("Say Hello") -command(hello);
        pack(".b") -padx(20) -pady(6);
    }

    Some of the errors are:

    tkDecls.h:644: error: expected declaration specifiers before 'EXTERN'
    /usr/include/libio.h:488: error: expected ')' before '*' token
    In file included from tk.h:1559, from new1.c:1:
    tkDecls.h:1196: error: storage class specified for parameter 'TkStubs'
    tkDecls.h:1201: error: expected '=', ',', ';', 'asm' or '__attribute__' before '*' token
    /usr/include/stdio.h:145: error: storage class specified for parameter 'stdin'
    tk.h:1273: error: declaration for parameter 'Tk_PhotoHandle' but no such parameter

    Can anyone please tell me how I can rectify these errors? Please help...

    Read the article

  • dynamically horizontally scalable key-value store

    - by Zubair
    Hi, is there a key-value store that will give me the following:

    - Allow me to simply add and remove nodes and will redistribute the data automatically
    - Allow me to remove nodes and still have 2 extra data nodes to provide redundancy
    - Allow me to store text or images up to 1GB in size
    - Can store small-size data up to 100TB of data
    - Fast (so will allow queries to be performed on top of it)
    - Make all this transparent to the client
    - Works on Ubuntu/FreeBSD or Mac
    - Free or open source

    I basically want something I can use as a "single" system, and not have to worry about having memcached, a DB, and several storage components; so yes, I do want a database "silver bullet", you could say. Thanks, Zubair

    Answers so far:

    - MogileFS on top of BackBlaze: as far as I can see this is just a filesystem, and after some research it only seems to be appropriate for large image files
    - Tokyo Tyrant: needs LightCloud. This doesn't auto-scale as you add new nodes. I did look into this, and it seems very fast for queries that fit onto a single node, though
    - Riak: this is one I am looking into myself, but I don't have any results yet
    - Amazon S3: is anyone using this as their sole persistence layer in production? From what I have seen, it seems to be used for storage of images, as complex queries are too expensive
    - Cassandra (suggested by @shaman): definitely one I am looking into

    So far it seems that there is no database or key-value store that fulfills the criteria I mentioned; not even after offering a bounty of 100 points did the question get answered!

    Read the article
