Search Results


  • Is it possible to specify a generic constraint for a type parameter to be convertible FROM another type?

    - by fostandy
    Suppose I write a library with the following:

        public class Bar { /* ... */ }

        public class SomeWeirdClass<T> where T : ???
        {
            public T BarMaker(Bar b)
            {
                // ... play with b
                T t = (T)b;
                return t;
            }
        }

    Later, I expect users to use my library by defining their own types which are convertible to Bar and using the SomeWeirdClass 'factory':

        public class Foo
        {
            public static explicit operator Foo(Bar f)
            {
                return new Foo();
            }
        }

        public class Demo
        {
            public static void demo()
            {
                Bar b = new Bar();
                SomeWeirdClass<Foo> weird = new SomeWeirdClass<Foo>();
                Foo f = weird.BarMaker(b);
            }
        }

    This will compile if I set where T : Foo, but the problem is that I don't know about Foo at the library's compile time. What I actually want is something more like where T : some class that can be instantiated, given a Bar. Is this possible? From my limited knowledge it does not seem to be, but the ingenuity of the .NET framework and its users always surprises me...

    This may or may not be related to the idea of static interface methods - at least, I can see the value in being able to specify the presence of factory methods to create objects (similar to the way that you can already write where T : new()).

    edit: Solution - thanks to Nick and bzIm - for other readers I'll provide a completed solution as I understand it.

    edit2: This solution requires Foo to expose a public default constructor. For an even better solution that does not require this, see the very bottom of this post.

        public class Bar { }

        public class SomeWeirdClass<T> where T : IConvertibleFromBar<T>, new()
        {
            public T BarMaker(Bar b)
            {
                T t = new T();
                t.Convert(b);
                return t;
            }
        }

        public interface IConvertibleFromBar<T>
        {
            T Convert(Bar b);
        }

        public class Foo : IConvertibleFromBar<Foo>
        {
            public static explicit operator Foo(Bar f)
            {
                return null;
            }

            public Foo Convert(Bar b)
            {
                return (Foo)b;
            }
        }

        public class Demo
        {
            public static void demo()
            {
                Bar b = new Bar();
                SomeWeirdClass<Foo> weird = new SomeWeirdClass<Foo>();
                Foo f = weird.BarMaker(b);
            }
        }

    edit2: Solution 2: Create a type convertor factory to use:

        #region library defined code

        public class Bar { }

        public class SomeWeirdClass<T, TFactory> where TFactory : IConvertorFactory<Bar, T>, new()
        {
            private static TFactory convertor = new TFactory();

            public T BarMaker(Bar b)
            {
                return convertor.Convert(b);
            }
        }

        public interface IConvertorFactory<TFrom, TTo>
        {
            TTo Convert(TFrom from);
        }

        #endregion

        #region user defined code

        public class BarToFooConvertor : IConvertorFactory<Bar, Foo>
        {
            public Foo Convert(Bar from)
            {
                return (Foo)from;
            }
        }

        public class Foo
        {
            public Foo(int a) { }

            public static explicit operator Foo(Bar f)
            {
                return null;
            }

            public Foo Convert(Bar b)
            {
                return (Foo)b;
            }
        }

        #endregion

        public class Demo
        {
            public static void demo()
            {
                Bar b = new Bar();
                SomeWeirdClass<Foo, BarToFooConvertor> weird = new SomeWeirdClass<Foo, BarToFooConvertor>();
                Foo f = weird.BarMaker(b);
            }
        }

  • On REST: WADL or no IDL, is the following approach right?

    - by redben
    This question is a bit long, please bear with me. In REST, I think we should not need WADL or any IDL, but rather something that would implicitly cover its concept. The way I think about it: when we (humans) surf the Web and go to a web site for the first time, we don't know what services it provides. We discover those on the HTML home page (or a sitemap page in a help section), or maybe just the main menu on the home page. By analogy, the home page or site map is to us humans what WSDL is to WS-* or what WADL could be to a REST service - only that it's just like any other HTML content.

    I think that in REST the following is a good way to do things, respecting the HATEOAS paradigm: have a top-level (or default) resource that lists links to your other resources. For a library example, say RestLibrary.com/, it could be something like:

        <root xmlns:lib="http://librarystandards.com/libraryml">
          <resource class="lib:book">
            <link type="application/vnd.libraryml+xml" template="mylib.com/book/{isbn}" />
            <link type="application/vnd.libraryml+xml" rel="add" href="mylib.com/book" method="POST" />
            <link type="application/vnd.libraryml+xml" rel="update" template="mylib.com/book/{isbn}" method="PUT" />
          </resource>
          <resource class="lib:bookList">
            <link template="mylib.com/book?keywords={keywords}" type="application/vnd.openlibrary+xml" rel="search" />
          </resource>
        </root>

    Note that it is assumed that the media type "application/vnd.libraryml+xml" is a defined standard (or maybe just a proprietary vocabulary) named libraryml. Also, the client should be able to understand this "homepage" resource (the elements root, resource and link). This is the part that could be used instead of WADL: an abstract vocabulary that should be understandable by any client. You could use an existing standard like Atom, for example. But the main idea is to have an abstract vocabulary understandable by any client.

    Why not WADL then? Well, WADL is only for service discovery. The idea here is to have a light abstract vocabulary that would serve as a base for hypermedia - a "root" vocabulary, like in OWL we have owl:Thing, etc. Now if the client knows the "libraryml" standard, it can follow the links to the things it understands (after parsing the media type properties and xmlns). If not, it just won't.

    When I can't understand how to deal with something in the REST architecture, I tend to look at how we humans do it on the Web. On the Web, we have a generic language, HTML, that enables site builders to deliver any specific content regardless of its meaning to the client (the user). Browsers understand HTML but not the "meaning" of its content; it is the user who understands the (domain-specific) content. If I go to, say, QuantumPhysics.org, my browser can render the home page (it is just HTML after all) and I can read it. If I understand quantum physics, fine, I can continue browsing; if I don't, I just leave (unless I want to learn the hard way :) ).

    In the RestLibrary.com example, the client app is just like me plus my browser on QuantumPhysics.org. The media type "application/vnd.libraryml+xml" is the quantum physics (knowledge), and HTTP is HTTP in both examples. What HTML is to QuantumPhysics.org, XML plus that tiny abstract vocabulary (root, resource and link - which you could replace with something like Atom) is to RestLibrary.com.

    So does this approach have any value? Don't we need a tiny root hyper-vocabulary so we can succeed with hypermedia and the "initial URI" concept?

    edit: Yeah, why not RDF as the root vocabulary!

  • Setting Session Variable from UpdatePanel

    - by Gavin
    I am using ASP.NET 2.0 AJAX Extensions 1.0 with version v1.0.20229 of the AJAX Control Toolkit (which to my knowledge is the latest for .NET 2.0/Visual Studio 2005). My web page (aspx) has a DropDownList control on an UpdatePanel. In the handler for the DropDownList's SelectedIndexChanged event I attempt to set a session variable. The first time the event is fired, I get a Sys.WebForms.PageRequestManagerParserErrorException: "The message received from the server could not be parsed". If I continue, subsequent SelectedIndexChanged events are handled successfully.

    I have stumbled upon a solution whereby if I initialise the session variable in my Page_Load (so the event handler is just setting the value of a session variable that already exists, as opposed to creating a new one), the problem goes away. I'm happy to do this, but I'm curious as to exactly what the underlying cause is. Can anyone explain? (My suspicion is that setting the session variable produces a response from the server which is then returned to the 'caller', but it's not the sort of response it knows how to deal with, causing the exception?)

    EDIT: I reproduced the problem in a separate little project.

    Default.aspx:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="SessionTest._Default" %>

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Untitled Page</title>
        </head>
        <body>
            <form id="form1" runat="server">
                <asp:ScriptManager ID="ScriptManager1" runat="server" />
                <div>
                    <asp:UpdatePanel id="upCategorySelector" runat="server">
                        <ContentTemplate>
                            Category:
                            <asp:DropDownList ID="ddlCategory" runat="server" AutoPostBack="true"
                                OnSelectedIndexChanged="ddlCategories_SelectedIndexChanged">
                                <asp:ListItem>Item 1</asp:ListItem>
                                <asp:ListItem>Item 2</asp:ListItem>
                                <asp:ListItem>Item 3</asp:ListItem>
                            </asp:DropDownList>
                        </ContentTemplate>
                    </asp:UpdatePanel>
                </div>
            </form>
        </body>
        </html>

    Default.aspx.cs:

        using System;
        using System.Data;
        using System.Configuration;
        using System.Collections;
        using System.Web;
        using System.Web.Security;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using System.Web.UI.HtmlControls;

        namespace SessionTest
        {
            public partial class _Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    // If I do this, the exception does not occur.
                    if (Session["key"] == null)
                        Session.Add("key", 0);
                }

                protected void ddlCategories_SelectedIndexChanged(object sender, EventArgs e)
                {
                    // If Session["key"] has not been created, setting it from
                    // the async call causes the exception.
                    Session.Add("key", ((DropDownList)sender).SelectedValue);
                }
            }
        }
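    For what it's worth, the same workaround can live in one place instead of in every page's Page_Load - a sketch, assuming adding a Global.asax to the project is acceptable (the session key name is the one from the sample above):

        // Global.asax.cs
        protected void Session_Start(object sender, EventArgs e)
        {
            // Create the session state up front, so that no asynchronous
            // postback is ever the first writer to it (which is what appears
            // to trigger the parser error).
            Session["key"] = 0;
        }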

  • How to tunnel only specific hosts through an OpenVPN client on Tomato

    - by kcome
    I am relatively new to the networking world, although I have done coding and some sysadmin work for a long time, and here I'm only one step from my destination. The whole picture: at home I use one Linksys E3000 as the gateway (not sure that is the right term), wireless AP, and no other routing/switching devices. It serves 1 PC and 1 Mac over LAN, plus 1 Mac Mini, 1 iPad and 2 smartphones over WiFi.

    My goal is to run an openvpn client on the E3000 (with Tomato firmware) and make all the WiFi traffic of my iPad and smartphones go through it, while the other devices keep the normal non-VPN route. So far I'm able to connect the openvpn client on the E3000 to an openvpn server and tunnel all my devices' traffic through that connection. What's left is how to selectively route by source IP (at least that's my guess) into the tunnel without bothering the others. I have learned some 'iptables' and 'route' in the past few days, though without much luck, so here comes my question.

    Here is some info which will help you get the structure. ifconfig -a output (some useless lines stripped); in the web interface C0:C1:C0:1A:E0:28 is WAN, C0:C1:C0:1A:E0:27 is LAN, C0:C1:C0:1A:E0:29 is the 2.4G WiFi AP, and C0:C1:C0:1A:E0:2A is the 5G WiFi AP:

        root@router:/tmp/home/root# ifconfig -a
        br0    Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:27
               inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        eth0   Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:27
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        eth1   Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:29
               UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
        eth2   Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:2A
               UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
        lo     Link encap:Local Loopback
               inet addr:127.0.0.1  Mask:255.0.0.0
               inet6 addr: ::1/128 Scope:Host
        ppp0   Link encap:Point-to-Point Protocol
               inet addr:172.200.1.43  P-t-P:172.200.0.1  Mask:255.255.255.255
               UP POINTOPOINT RUNNING MULTICAST  MTU:1480  Metric:1
        vlan1  Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:27
               UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
        vlan2  Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:28
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        wl0.1  Link encap:Ethernet  HWaddr C0:C1:C0:1A:E0:29
               BROADCAST MULTICAST  MTU:1500  Metric:1

    brctl show output:

        root@router:/tmp/home/root# brctl show
        bridge name   bridge id           STP enabled   interfaces
        br0           8000.c0c1c01ae027   no            vlan1
                                                        eth1
                                                        eth2

    Routes before the openvpn route-up script:

        root@router:/tmp/home/root# route -n
        Kernel IP routing table
        Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
        172.200.0.1     0.0.0.0         255.255.255.255  UH    0      0      0 ppp0
        192.168.1.0     0.0.0.0         255.255.255.0    U     0      0      0 br0
        127.0.0.0       0.0.0.0         255.0.0.0        U     0      0      0 lo
        0.0.0.0         172.200.0.1     0.0.0.0          UG    0      0      0 ppp0

    The openvpn server push:

        PUSH: Received control message: 'PUSH_REPLY,redirect-gateway,dhcp-option DNS 8.8.8.8,route 172.20.0.1,topology net30,ping 10,ping-restart 120,ifconfig 172.20.0.6 172.20.0.5'

    openvpn's stock route-up script:

        Apr 24 14:52:06 router daemon.notice openvpn[1768]: /sbin/ifconfig tun11 172.20.0.6 pointopoint 172.20.0.5 mtu 1500
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 72.14.177.29 netmask 255.255.255.255 gw 172.200.0.1
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 172.20.0.5
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 172.20.0.5
        Apr 24 14:52:08 router daemon.notice openvpn[1768]: /sbin/route add -net 172.20.0.1 netmask 255.255.255.255 gw 172.20.0.5

    Routes after openvpn comes up:

        root@router:/tmp/home/root# route -n
        Kernel IP routing table
        Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
        172.20.0.5      0.0.0.0         255.255.255.255  UH    0      0      0 tun11
        72.14.177.29    172.200.0.1     255.255.255.255  UGH   0      0      0 ppp0
        172.200.0.1     0.0.0.0         255.255.255.255  UH    0      0      0 ppp0
        172.20.0.1      172.20.0.5      255.255.255.255  UGH   0      0      0 tun11
        192.168.1.0     0.0.0.0         255.255.255.0    U     0      0      0 br0
        127.0.0.0       0.0.0.0         255.0.0.0        U     0      0      0 lo
        0.0.0.0         172.20.0.5      128.0.0.0        UG    0      0      0 tun11
        128.0.0.0       172.20.0.5      128.0.0.0        UG    0      0      0 tun11
        0.0.0.0         172.200.0.1     0.0.0.0          UG    0      0      0 ppp0

    Something I noticed and tried: on the web interface of the openvpn client there is an option "Create NAT on tunnel". If I check this, the following script runs (probably after the openvpn connection is established):

        root@router:/tmp/home/root# cat /tmp/etc/openvpn/fw/client1-fw.sh
        #!/bin/sh
        iptables -I INPUT -i tun11 -j ACCEPT
        iptables -I FORWARD -i tun11 -j ACCEPT
        iptables -t nat -I POSTROUTING -s 192.168.1.0/255.255.255.0 -o tun11 -j MASQUERADE

    If I uncheck this option, the last line does not appear. So I guess my issue can be solved with iptables and NAT-related commands; I just don't have enough knowledge to figure them out. I tried running

        iptables -t nat -I POSTROUTING -s 192.168.1.6 -o tun11 -j MASQUERADE

    manually after openvpn connected (192.168.1.6 is the IP address of my iPad); then my iPad could reach the internet through the openvpn tunnel, but all the other devices lost internet access. In case it is needed, here is the NAT table:

        root@router:/tmp/home/root# iptables -t nat -L -n
        Chain PREROUTING (policy ACCEPT)
        target          prot opt source           destination
        DROP            all  --  0.0.0.0/0        192.168.1.0/24
        WANPREROUTING   all  --  0.0.0.0/0        172.200.1.43
        upnp            all  --  0.0.0.0/0        172.200.1.43

        Chain POSTROUTING (policy ACCEPT)
        target          prot opt source           destination
        MASQUERADE      all  --  0.0.0.0/0        0.0.0.0/0
        SNAT            all  --  192.168.1.0/24   192.168.1.0/24   to:192.168.1.1

        Chain OUTPUT (policy ACCEPT)
        target          prot opt source           destination

        Chain WANPREROUTING (1 references)
        target          prot opt source           destination
        DNAT            icmp --  0.0.0.0/0        0.0.0.0/0        to:192.168.1.1

        Chain upnp (1 references)
        target          prot opt source           destination
        DNAT            udp  --  0.0.0.0/0        0.0.0.0/0        udp dpt:5353 to:192.168.1.3:5353

    Thanks in advance for helping and for reading this much; I hope I have included every piece of info you need to give a hand :)
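    What is being described is usually solved with policy-based routing rather than NAT alone: keep the main routing table pointing at ppp0, put the VPN default route in a second table, and add a rule sending traffic from the chosen source IPs to that table. A sketch, assuming the Tomato build ships the iproute2 "ip" tool and that openvpn is kept from rewriting the main table (e.g. with route-noexec); the addresses are the ones from the outputs above, and the table number 100 is arbitrary:

        # default route for VPN clients lives in its own table
        ip route add default via 172.20.0.5 dev tun11 table 100

        # send the iPad (and any other chosen host) through that table
        ip rule add from 192.168.1.6 table 100
        ip route flush cache

        # NAT only the tunnelled hosts out of the tunnel interface
        iptables -t nat -A POSTROUTING -s 192.168.1.6 -o tun11 -j MASQUERADE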

  • Poor upload/download speed on 2 x ADSL lines into a Cisco 2621XM

    - by 2020mobile
    Hi, sorry, I've never been on this site before, so I apologise if this is not the right section or even forum. I have users complaining of very slow internet connectivity on site and have checked with our ISP, who have said that the line is testing at 8 Mb. We have 2 x BT lines that carry our ISP's broadband. Both lines go into a Cisco 2600 series router, which then has a PIX firewall off it. Connectivity is successful; it has just become really slow and we are unable to download anything. The config is below:

        version 12.3
        no service pad
        service tcp-keepalives-in
        service tcp-keepalives-out
        service timestamps debug datetime msec
        service timestamps log datetime msec
        service password-encryption
        !
        hostname ROUTER-ADSL-INTERNET
        !
        logging buffered 16384 informational
        enable secret xxx
        enable password xxx
        !
        username xxx
        username xxx
        clock summer-time UK recurring last Sun Mar 1:00 last Sun Oct 1:00
        aaa new-model
        !
        aaa authentication login default local
        aaa authorization exec default local
        aaa session-id common
        ip subnet-zero
        no ip source-route
        !
        ip audit notify log
        ip audit po max-events 100
        no ip bootp server
        ip name-server 213.208.106.212
        no mpls ldp logging neighbor-changes
        no ftp-server write-enable
        !
        no voice hpi capture buffer
        no voice hpi capture destination
        !
        interface ATM0/0
         description 01270 111111
         no ip address
         no atm ilmi-keepalive
         pvc 0/38
          encapsulation aal5mux ppp dialer
          dialer pool-member 1
         !
         dsl operating-mode auto
        !
        interface FastEthernet0/0
         ip address 82.133.32.9 255.255.255.248
         shutdown
         speed 100
         full-duplex
         no cdp enable
        !
        interface ATM0/1
         description 01270 222222
         no ip address
         no atm ilmi-keepalive
         pvc 0/38
          encapsulation aal5mux ppp dialer
          dialer pool-member 1
         !
         dsl operating-mode auto
        !
        interface FastEthernet0/1
         ip address 217.146.115.49 255.255.255.240
         duplex auto
         speed auto
         no cdp enable
        !
        interface Dialer0
         ip address 217.146.115.250 255.255.255.248
         encapsulation ppp
         dialer pool 1
         dialer-group 1
         ppp authentication chap callin
         ppp chap hostname [email protected]
         ppp chap password 7 xxxxx
         ppp multilink
        !
        ip classless
        ip route 0.0.0.0 0.0.0.0 Dialer0
        !
        no ip http server
        no ip http secure-server
        !
        no logging trap
        access-list 10 permit 217.146.115.50
        access-list 10 permit 82.133.32.10
        access-list 10 deny any
        access-list 22 permit 217.146.115.50
        access-list 22 permit 217.206.239.86
        access-list 22 permit 82.133.32.10
        access-list 22 deny any
        dialer-list 1 protocol ip permit
        no cdp run
        !
        snmp-server community xxxxxx RO 10
        snmp-server enable traps tty
        radius-server authorization permit missing Service-Type
        !
        line con 0
         exec-timeout 5 0
         password 7 xxxxxx
        line aux 0
         no exec
        line vty 0 4
         access-class 22 in
         exec-timeout 5 0
         password 7 xxxxxx
         transport input telnet ssh
         transport output none
        line vty 5 15
         password 7 xxxxxx
         transport input telnet ssh
        !
        ntp clock-period 17180095
        ntp server 130.88.200.98
        !
        end

    Now, my knowledge is very limited, but the ISP have said that while the lines are bonded, each needs a separate login, as they've recently changed their L2TP router and that enforces the use of separate logins - when the lines were configured we were given two logins. So, my question is: what changes do I need to make to the config in order to get this working? It was OK before their change, and I do have the second login:

        01270 111111 - [email protected]
        01270 222222 - [email protected]

    Apologies for the long message and thanks for taking the time to read it. Any more info I can provide, please let me know. Thanks.
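    For what it's worth, a sketch of one way to give each line its own login, assuming the ISP expects one PPP session per ATM PVC and bundles them with multilink on its side (interface numbers are from the config above; nothing here is verified against the ISP's setup, and the placeholders must be replaced with the second login):

        ! move the second PVC into its own dialer pool
        interface ATM0/1
         pvc 0/38
          encapsulation aal5mux ppp dialer
          dialer pool-member 2
        !
        ! Dialer0 keeps the first login (01270 111111) and stays on pool 1
        interface Dialer0
         ppp multilink
        !
        ! new dialer interface for the second line with the second login (01270 222222)
        interface Dialer1
         ip unnumbered Dialer0
         encapsulation ppp
         dialer pool 2
         dialer-group 1
         ppp authentication chap callin
         ppp chap hostname <second-login>
         ppp chap password <second-password>
         ppp multilink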

  • How do you automap List<float> or float[] with Fluent NHibernate?

    - by Tom Bushell
    Having successfully gotten a sample program working, I'm now starting to do Real Work with Fluent NHibernate - trying to use automapping on my project's class hierarchy. It's a scientific instrumentation application, and the classes I'm mapping have several properties that are arrays of floats, e.g.:

        private float[] _rawY;

        public virtual float[] RawY
        {
            get { return _rawY; }
            set { _rawY = value; }
        }

    These arrays can contain a maximum of 500 values. I didn't expect automapping to work on arrays, but tried it anyway, with some success at first. Each array was automapped to a BLOB (using SQLite), which seemed like a viable solution. The first problem came when I tried to call SaveOrUpdate on the objects containing the arrays - I got "No persister for float[]" exceptions.

    So my next thought was to convert all my arrays into ILists, e.g.:

        public virtual IList<float> RawY { get; set; }

    But now I get:

        NHibernate.MappingException: Association references unmapped class: System.Single

    Since automapping can deal with lists of complex objects, it never occurred to me that it would not be able to map lists of basic types. But after doing some Googling for a solution, this seems to be the case. Some people seem to have solved the problem, but the sample code I saw requires more knowledge of NHibernate than I have right now - I didn't understand it.

    Questions:

    1. How can I make this work with automapping?
    2. Also, is it better to use arrays or lists for this application? I can modify my app to use either if necessary (though I prefer lists).

    Edit: I've studied the code in Mapping Collection of Strings, and I see there is test code in the source that sets up an IList of strings, e.g.:

        public virtual IList<string> ListOfSimpleChildren { get; set; }

        [Test]
        public void CanSetAsElement()
        {
            new MappingTester<OneToManyTarget>()
                .ForMapping(m => m.HasMany(x => x.ListOfSimpleChildren).Element("columnName"))
                .Element("class/bag/element").Exists();
        }

    so this must be possible using pure automapping, but I've had zero luck getting anything to work, probably because I don't have the requisite knowledge of manually mapping with NHibernate. I'm starting to think I'll have to hack this (by encoding the array of floats as a single string, or creating a class that contains a single float which I then aggregate into my lists), unless someone can tell me how to do it properly. End Edit

    Here's my CreateSessionFactory method, if that helps formulate a reply...

        private static ISessionFactory CreateSessionFactory()
        {
            ISessionFactory sessionFactory = null;

            const string autoMapExportDir = "AutoMapExport";
            if (!Directory.Exists(autoMapExportDir))
                Directory.CreateDirectory(autoMapExportDir);

            try
            {
                var autoPersistenceModel =
                    AutoMap.AssemblyOf<DlsAppOverlordExportRunData>()
                           .Where(t => t.Namespace == "DlsAppAutomapped")
                           .Conventions.Add(DefaultCascade.All());

                sessionFactory = Fluently.Configure()
                                         .Database(SQLiteConfiguration.Standard
                                                       .UsingFile(DbFile)
                                                       .ShowSql())
                                         .Mappings(m => m.AutoMappings.Add(autoPersistenceModel)
                                                         .ExportTo(autoMapExportDir))
                                         .ExposeConfiguration(BuildSchema)
                                         .BuildSessionFactory();
            }
            catch (Exception e)
            {
                Debug.WriteLine(e);
            }

            return sessionFactory;
        }
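    For the record, here is a sketch of the direction I would try first with pure automapping: an override that maps the list as a collection of simple elements rather than an entity association. This assumes the HasMany(...).Element(...) call shown in the "Mapping Collection of Strings" test code works the same way from an automapping override, that RawY lives on DlsAppOverlordExportRunData for illustration, and the column names are invented:

        var autoPersistenceModel =
            AutoMap.AssemblyOf<DlsAppOverlordExportRunData>()
                   .Where(t => t.Namespace == "DlsAppAutomapped")
                   .Conventions.Add(DefaultCascade.All())
                   // hypothetical override: persist RawY (an IList<float>) as simple elements
                   .Override<DlsAppOverlordExportRunData>(map =>
                       map.HasMany(x => x.RawY)
                          .AsList(index => index.Column("SortOrder")) // position column (assumed name)
                          .Element("Value"));                         // value column (assumed name)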

  • Modern Java alternatives

    - by Ralph
    I'm not sure if stackoverflow is the best forum for this discussion. I have been a Java developer for 14 years and have written an enterprise-level (~500,000 line) Swing application that uses most of the standard library APIs. Recently I have become disappointed with the progress the language has made to "modernize" itself, and I am looking for an alternative for ongoing development.

    I have considered moving to the .NET platform, but I have issues with using something that only runs well on Windows (I know about Mono, but that is still far behind Microsoft). I also plan on buying a new MacBook Pro as soon as Apple releases their rumored Arrandale-based machines, and I want to develop in an environment that will feel "at home" in Unix/Linux. I have considered Python and Ruby, but the standard Java library is arguably the largest of any modern language. Among JVM-based languages I looked at Groovy, but am disappointed with its performance. Rumor has it that the soon-to-be-released JDK7, with its InvokeDynamic instruction, will improve this, but I don't know by how much. Groovy is also not truly a functional language: although it provides closures and some "functional" features on collections, it does not embrace immutability.

    I have narrowed my search down to two JVM-based alternatives: Scala and Clojure. Each has its strengths and weaknesses, and I am looking for the stackoverflow readership's opinions. I am not an expert at either of these languages; I have read two and a half books on Scala and am currently reading Stu Halloway's book on Clojure.

    Scala is strongly statically typed. I know the dynamic-language folks claim that static typing is a crutch for not doing unit testing, but it does provide a mechanism for compile-time detection of a whole class of errors. Scala is more concise than Java, but not as concise as Clojure. Scala's interoperation with Java seems to be better than Clojure's, in that most Java operations are easier to do in Scala than in Clojure. For example, I can find no way in Clojure to create a non-static initialization block in a class derived from a Java superclass. To illustrate: I like the Apache Commons CLI library for command-line argument parsing. In Java and Scala, I can create a new Options object and add Option items to it in an initialization block as follows (Java code):

        final Options options = new Options() {
            {
                addOption(new Option("?", "help", false, "Show this usage information"));
                // other options
            }
        };

    I can't figure out how to do the same thing in Clojure (except by using (doto ...)), although that may reflect my lack of knowledge of the language.

    Clojure's collections are optimized for immutability; they rarely require copy-on-write semantics. I don't know if Scala's immutable collections are implemented using similar algorithms, but Rich Hickey (Clojure's inventor) goes out of his way to explain how that language's data structures are efficient. Clojure was designed from the beginning for concurrency (as was Scala), and with modern multi-core processors concurrency takes on more importance. But I occasionally need to write simple non-concurrent utilities, and Scala code probably runs a little faster for these applications, since it discourages, but does not prohibit, "simple" mutability. One could argue that one-off utilities do not have to be super-fast, but sometimes they do tasks that take hours or days to complete.

    I know that there is no right answer to this "question", but I thought I would open it up for discussion. If anyone has a suggestion for another JVM-based language that can be used for enterprise-level development, please list it. Also, it is not my intent to start a flame war. Thanks, Ralph
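    For the record, the specific Commons CLI example above does have a reasonably direct Clojure translation - not an instance-initializer block, but interop with doto, which threads the freshly created object through the method calls. A sketch, assuming commons-cli is on the classpath:

        (import '(org.apache.commons.cli Options Option))

        (def options
          (doto (Options.)
            (.addOption (Option. "?" "help" false "Show this usage information"))
            ;; other options go here, one (.addOption ...) form each
            ))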

  • Error when pushing to Heroku - StatementInvalid - Ruby on Rails

    - by bgadoci
    I am trying to deploy my first Rails app to Heroku and seem to be having a problem. After git push heroku master I get an error saying that relation "tags" does not exist. I understand that without knowledge of my application it will be hard to help, but I am wondering if someone can point me in the right direction. I have checked the schema.rb file and have also been over all my migrations, and there doesn't seem to be a problem there. The error message led me to believe that I left something out of my routes.rb file, but I can't seem to find anything there either. Perhaps I just need some help deciphering this message:

        Processing PostsController#index (for 99.7.50.140 at 2010-04-21 12:28:59) [GET]

        ActiveRecord::StatementInvalid (PGError: ERROR: relation "tags" does not exist
        : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull
            FROM pg_attribute a
            LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum
           WHERE a.attrelid = '"tags"'::regclass
             AND a.attnum > 0 AND NOT a.attisdropped
           ORDER BY a.attnum
        ):
          app/controllers/posts_controller.rb:9:in `index'
          /home/heroku_rack/lib/static_assets.rb:9:in `call'
          /home/heroku_rack/lib/last_access.rb:25:in `call'
          /home/heroku_rack/lib/date_header.rb:14:in `call'
          thin (1.0.1) lib/thin/connection.rb:80:in `pre_process'
          thin (1.0.1) lib/thin/connection.rb:78:in `catch'
          thin (1.0.1) lib/thin/connection.rb:78:in `pre_process'
          thin (1.0.1) lib/thin/connection.rb:57:in `process'
          thin (1.0.1) lib/thin/connection.rb:42:in `receive_data'
          eventmachine (0.12.6) lib/eventmachine.rb:240:in `run_machine'
          eventmachine (0.12.6) lib/eventmachine.rb:240:in `run'
          thin (1.0.1) lib/thin/backends/base.rb:57:in `start'
          thin (1.0.1) lib/thin/server.rb:150:in `start'
          thin (1.0.1) lib/thin/controllers/controller.rb:80:in `start'
          thin (1.0.1) lib/thin/runner.rb:173:in `send'
          thin (1.0.1) lib/thin/runner.rb:173:in `run_command'
          thin (1.0.1) lib/thin/runner.rb:139:in `run!'
          thin (1.0.1) bin/thin:6
          /usr/local/bin/thin:20:in `load'
          /usr/local/bin/thin:20

    Also, here is my routes.rb file if that helps at all:

        ActionController::Routing::Routes.draw do |map|
          map.resources :ugtags
          map.resources :wysihat_files
          map.resources :users
          map.resources :votes
          map.resources :votes, :belongs_to => :user
          map.resources :tags, :belongs_to => :user
          map.resources :ugtags, :belongs_to => :user
          map.resources :posts, :collection => { :auto_complete_for_tag_tag_name => :get }
          map.resources :posts, :sessions
          map.resources :posts, :has_many => :comments
          map.resources :posts, :has_many => :tags
          map.resources :posts, :has_many => :ugtags
          map.resources :posts, :has_many => :votes
          map.resources :posts, :belongs_to => :user
          map.resources :tags, :collection => { :auto_complete_for_tag_tag_name => :get }
          map.resources :ugtags, :collection => { :auto_complete_for_ugtag_ugctag_name => :get }

          map.login 'login', :controller => 'sessions', :action => 'new'
          map.logout 'logout', :controller => 'sessions', :action => 'destroy'

          map.root :controller => "posts"

          map.connect ':controller/:action/:id'
          map.connect ':controller/:action/:id.:format'
        end
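    In case it helps to rule out the obvious first (an assumption, since the question doesn't say whether this was done): a git push does not run migrations on Heroku's database, so a schema that is fine locally can still be missing the tags table in production until the migrations are run there explicitly - at the time, with:

        git push heroku master
        heroku rake db:migrate
        heroku restart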

  • iPhone SDK / Objective C Syntax Question

    - by Koppo
    To all: I was looking at the sample project from http://iphoneonrails.com/ and saw that they took the NSObject class and added methods to it for the SOAP calls. Their sample project can be downloaded from http://iphoneonrails.com/downloads/objective_resource-1.01.zip.

    My question relates to my lack of knowledge of the following syntax, as I haven't seen it in an iPhone project before. There is a header file called NSObject+ObjectiveResource.h where they declare and change NSObject to have extra methods for this project. Is the "+ObjectiveResource.h" in the name a special syntax, or is it just the naming convention of the developers?

    Finally, inside the class NSObject we have the following:

        #import <Foundation/Foundation.h>

        @interface NSObject (ObjectiveResource)

        // Response Formats
        typedef enum {
            XmlResponse = 0,
            JSONResponse,
        } ORSResponseFormat;

        // Resource configuration
        + (NSString *)getRemoteSite;
        + (void)setRemoteSite:(NSString *)siteURL;
        + (NSString *)getRemoteUser;
        + (void)setRemoteUser:(NSString *)user;
        + (NSString *)getRemotePassword;
        + (void)setRemotePassword:(NSString *)password;
        + (SEL)getRemoteParseDataMethod;
        + (void)setRemoteParseDataMethod:(SEL)parseMethod;
        + (SEL)getRemoteSerializeMethod;
        + (void)setRemoteSerializeMethod:(SEL)serializeMethod;
        + (NSString *)getRemoteProtocolExtension;
        + (void)setRemoteProtocolExtension:(NSString *)protocolExtension;
        + (void)setRemoteResponseType:(ORSResponseFormat)format;
        + (ORSResponseFormat)getRemoteResponseType;

        // Finders
        + (NSArray *)findAllRemote;
        + (NSArray *)findAllRemoteWithResponse:(NSError **)aError;
        + (id)findRemote:(NSString *)elementId;
        + (id)findRemote:(NSString *)elementId withResponse:(NSError **)aError;

        // URL construction accessors
        + (NSString *)getRemoteElementName;
        + (NSString *)getRemoteCollectionName;
        + (NSString *)getRemoteElementPath:(NSString *)elementId;
        + (NSString *)getRemoteCollectionPath;
        + (NSString *)getRemoteCollectionPathWithParameters:(NSDictionary *)parameters;
        + (NSString *)populateRemotePath:(NSString *)path withParameters:(NSDictionary *)parameters;

        // Instance-specific methods
        - (id)getRemoteId;
        - (void)setRemoteId:(id)orsId;
        - (NSString *)getRemoteClassIdName;
        - (BOOL)createRemote;
        - (BOOL)createRemoteWithResponse:(NSError **)aError;
        - (BOOL)createRemoteWithParameters:(NSDictionary *)parameters;
        - (BOOL)createRemoteWithParameters:(NSDictionary *)parameters andResponse:(NSError **)aError;
        - (BOOL)destroyRemote;
        - (BOOL)destroyRemoteWithResponse:(NSError **)aError;
        - (BOOL)updateRemote;
        - (BOOL)updateRemoteWithResponse:(NSError **)aError;
        - (BOOL)saveRemote;
        - (BOOL)saveRemoteWithResponse:(NSError **)aError;
        - (BOOL)createRemoteAtPath:(NSString *)path withResponse:(NSError **)aError;
        - (BOOL)updateRemoteAtPath:(NSString *)path withResponse:(NSError **)aError;
        - (BOOL)destroyRemoteAtPath:(NSString *)path withResponse:(NSError **)aError;

        // Instance helpers for getting at commonly used class-level values
        - (NSString *)getRemoteCollectionPath;
        - (NSString *)convertToRemoteExpectedType;

        // Equality test for remote-enabled objects based on class name and remote id
        - (BOOL)isEqualToRemote:(id)anObject;
        - (NSUInteger)hashForRemote;

        @end

    What is the "ObjectiveResource" in the parentheses doing for NSObject? What is that telling Xcode and the compiler about what is happening? After that, things look normal to me, as there are various class and instance methods. I know that by doing this, all user classes that inherit from NSObject now have all the extra methods for this project. My question is: what do the parentheses after NSObject mean? Are they referencing a header file, or letting the compiler know that this class is being overridden? Thanks again, and my apologies ahead of time if this is a dumb question; I'm just trying to learn what I lack.
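    The parentheses denote an Objective-C category: @interface NSObject (ObjectiveResource) reopens the existing NSObject class and adds methods to it (no subclassing, no overriding), and the "NSObject+ObjectiveResource.h" file name is just the widely used Class+CategoryName naming convention, not special syntax. A minimal sketch of the same mechanism on NSString (names invented for illustration):

        #import <Foundation/Foundation.h>

        // NSString+Greeting.h - "Greeting" is the category name
        @interface NSString (Greeting)
        - (NSString *)greet;
        @end

        // NSString+Greeting.m
        @implementation NSString (Greeting)
        - (NSString *)greet {
            // every NSString instance now responds to -greet
            return [NSString stringWithFormat:@"Hello, %@", self];
        }
        @end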

  • Search engine solution for Django that actually works?

    - by prometheus
    The story so far:

    Decided to go with Xapian as the search backend because it has all the search-engine features I was looking for, knows about Unicode and stemming, has few dependencies, and requires no bloated app-server installation on top of it.

    Tried Django and Haystack (plus xapian-haystack, the backend glue code that ties Haystack to Xapian) because it was advertised on quite a few blogs as "working". It did not work. Neither django-haystack nor the xapian-haystack project provides a version combination that actually works together. MASTER from both projects yields an error from Xapian, so it's not stable at all. Haystack 1.0.1 and xapian-haystack 1.0.x/1.1.0 are not API-compatible. Moreover, in a minimally working installation of Haystack 1.0.1 and xapian-haystack MASTER, any complex query yields zero results due to errors in either django-haystack or xapian-haystack (I double-verified this), maybe because the unit tests only cover very simple cases and no edge cases at all.

    Tried Djapian. The source code is riddled with spelling errors (mind you, in variable names, not comments), and the documentation is riddled with ambiguities and outdated information that will never lead to a working installation. Not surprisingly, users rarely ask for features, only how to get it working in the first place.

    Next on the plate: exploring Solr (installing a Java environment plus Tomcat gives me headaches; the machine is RAM- and CPU-constrained) or Lucene (slightly fewer headaches, but still).

    Before I spend more time on a solution that might or might not work as advertised, I'd like to know: did anyone ever get an actual, real-world search solution working in Django? I'm serious. I find it really frustrating to read about "large problems mostly solved" and then realize that you will never get a working installation from the source code because, actually, all the bloggers dealing with those "mostly solved problems" never went past basic installation and copy-pasting the official tutorials.

    So here are the requirements:

    - must be able to search for 10-100 terms in one query
    - must handle + (term must be present) and - (term must not be present), AND/OR
    - must handle arbitrary grouping (i.e. parentheses around AND/OR)
    - must allow Django-ORM filtering before or after the fulltext search (i.e. pre-/post-processing of results with the full set of filters that Django knows about); alternatively, there must be a facility to bulk-fetch the result set and transform it into a QuerySet
    - should be light on the machine, so preferably no humongous JVM and Java-based app-server installation

    Is there anything out there that does this? I'm not interested in anecdotal evidence, or references to blog posts that claim it should be working. I'd like to hear from someone who actually has a fully functional setup working in the real world, under real conditions, with real queries.

    EDIT: Let me repeat that I'm not so much interested in anecdotal evidence that someone, somewhere has a somewhat-running installation with unspecified properties. I already went there; I read all the blog posts and mailing lists, and I contacted the authors, but when it came to actual implementation of real-world scenarios, nothing ever worked as advertised. Also, and a user below brought that point up as well, considering the TCO of any project, I'm definitely not interested in hearing that someone, somewhere was able to pull it off once a vendor parachuted in an unknown number of specialists to monkey-patch the whole installation with specific domain knowledge that's documented nowhere.

    So, please, if you claim you have a working installation that actually satisfies the minimum requirements for a full-fledged search (see above), please provide the following so that we can all benefit from a search solution for Django that actually solves the problem:

    - exact Linux distribution and release version,
    - exact release version of Haystack (or equivalent) and release version of the search backend,
    - exact release version of the search engine,
    - publicly (!) available documentation on how to set up all components exactly the way your installation was set up, such that the minimal requirements above are met.

    Thank you.

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic etc.) without transferring the data into a new boot system - partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will.

    I have tried several things; here is a list:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run DiskWarrior on the copy and repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer data to the newly formatted drive folder by folder, resetting the Spotlight index in between adding each one to watch for issues (interestingly, no issues occurred except with the Documents folder - and when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble; it appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searching for anything beginning with "md" in Activity Monitor - All Processes and quitting it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it.

    In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated indexing time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive in the "4 hours remaining" situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are - the data on the volume dates back to music projects and compositions from 2003 and before - and he needs to be able to query for results. Anyone got any ideas?

    ---update 2-6-11---

    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command

        sudo opensnoop -p PID

    where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes", and the Terminal window with the opensnoop output stops moving. I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:

        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted on the system. All except ONM Data, which is the one I am trying to index, are currently excluded from Spotlight indexing. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    __final edit__

    I finally resolved the issue, and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where PID is the process id of the processes I was monitoring, to look at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and mdworker(s) would start indexing, and eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds sat at 90% or so of CPU.

    But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search, and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again.

    I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and the solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find the actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files - Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M
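    For anyone following the same trail, the procedure described above boils down to a few commands (the volume name is the one from this question; finding the PIDs via ps is my addition):

        # find the Spotlight processes (mds, mdworker) and note their PIDs
        ps ax | grep mdworker

        # watch which files a worker touches; when it dies, the last path
        # printed points at the problem folder
        sudo opensnoop -p <mdworker-pid>

        # after excluding the offending folder, erase and rebuild the index
        sudo mdutil -E /Volumes/ONM\ Data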

  • What is Causing this IIS 7 Web Service Sporadic Connectivity Error?

    - by dpalau
    On sporadic occasions we receive the following error when attempting to call an .asmx web service from a .NET client application:

        The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.
        Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.

    By sporadic I mean that it might occur zero times, once every few days, or a half-dozen times a day for some users. It never occurs on a user's first web service call, and the subsequent (usually identical) call always works immediately after the failure. The failures happen across a variety of methods in the service and usually occur 15-20 seconds (according to the log) after the request. Looking in the IIS site log for the particular call will show one or the other of the following Windows error codes:

        121: The semaphore timeout period has elapsed.
        1236: The network connection was aborted by the local system.

    Some additional environment details:

    - Running on an internal-network web farm consisting of two servers running IIS7 on Windows Server 2008. These problems did not occur when running in an older IIS6 web farm of three servers running on Windows Server 2003 (and we use a single IIS6/2003 instance for our development and staging environments with no issues). EDIT: also, all of these server instances are VMware virtual machines; not sure if that is a surprise anymore or not.
    - The web service is a .NET 2.0/3.5 compiled .asmx web service that has its own application pool (.NET 2.0, integrated pipeline) and only has Windows Authentication enabled.
    - We have another web service on the farm that uses the same physical path as the primary service, the only difference being that Basic Authentication is enabled. This is used for a portion of our ERP system. I have tried using the same and different application pools - no effect on the error. This site isn't hit as often as the primary site and has never had an error.
    - As mentioned, the error only happens when called from the .NET client - not from other applications. The client application always creates a new web service object for each request and sets the service credentials to System.Net.CredentialCache.DefaultCredentials.
    - The application is either deployed locally to a client or run in a Citrix server session. Users running in Citrix don't seem to experience the issue, only locally deployed clients. The Citrix servers and the web farm are located in the same physical location and are in the same IP range (10.67.xx.xx). Locally deployed clients experiencing the error are located elsewhere (10.105.xx.xx, 10.31.xx.xx).

    I've checked the OS logs to see if I can spot any problems, but nothing really sticks out. EDIT: Actually, I myself ran into the error a little while ago. I decided to check the logs again and saw that there was a Security log entry of "Audit Failure" at the 'same' time (IIS log entry at 1:39:59, event log entry at 1:39:50). Not sure if this is a coincidence or not; I'll have to check the logs of previous errors. I'm probably grasping at straws, but the details:

        Log Name:      Security
        Source:        Microsoft-Windows-Security-Auditing
        Date:          7/8/2009 1:39:50 PM
        Event ID:      5159
        Task Category: Filtering Platform Connection
        Level:         Information
        Keywords:      Audit Failure
        User:          N/A
        Computer:      is071019.<**.net
        Description:   The Windows Filtering Platform has blocked a bind to a local port.

        Application Information:
            Process ID:       1260
            Application Name: \device\harddiskvolume1\windows\system32\svchost.exe

        Network Information:
            Source Address: 0.0.0.0
            Source Port:    54802
            Protocol:       17

        Filter Information:
            Filter Run-Time ID: 0
            Layer Name:         Resource Assignment
            Layer Run-Time ID:  36

    I've also tried to use Failed Request Tracing in IIS7, but the service call never actually gets to where FRT can capture it (even though the failure is logged in the web service log). The network infrastructure group said they checked the DNS and that the NIC settings are correct, so there is no 'flapping'. Everything pans out. I'm not sure they checked the domain controller servers, though, to see if those could be an issue.

    Any ideas? Or any other debugging strategies for getting to the bottom of this? I'm just the developer in charge of the software and don't really have the knowledge of what to investigate on the networking side of things - although it does sound like a networking issue to me based on what is happening. Thanks in advance for any help.
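    One client-side workaround often suggested for this exact exception (an assumption here, since nothing in the question confirms keep-alive connection reuse is the trigger) is to stop the .asmx proxy from reusing pooled connections, by overriding GetWebRequest in a partial class extending the generated proxy:

        using System;
        using System.Net;

        // hypothetical: MyService is the name of the generated .asmx proxy class
        public partial class MyService
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);
                // Don't reuse a kept-alive connection the server may have dropped.
                request.KeepAlive = false;
                return request;
            }
        }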

  • How to save a large NHibernate collection without causing an OutOfMemoryException

    - by Michael Hedgpeth
    How do I save a large collection with NHibernate whose elements surpass the amount of memory allowed for the process?

    I am trying to save a Video object with NHibernate which has a large number of Screenshots (see below for code). Each Screenshot contains a byte[], so after NHibernate tries to save 10,000 or so records at once, an OutOfMemoryException is thrown. Normally I would try to break up the save and flush the session after every 500 or so records, but in this case I need to save the collection, because it automatically saves the SortOrder and VideoId for me (without the Screenshot having to know that it is part of a Video). What is the best approach given my situation? Is there a way to break up this save without forcing the Screenshot to have knowledge of its parent Video?

    For your reference, here is the code from the simple sample I created:

        public class Video
        {
            public long Id { get; set; }
            public string Name { get; set; }

            public Video()
            {
                Screenshots = new ArrayList();
            }

            public IList Screenshots { get; set; }
        }

        public class Screenshot
        {
            public long Id { get; set; }
            public byte[] Data { get; set; }
        }

    And the mappings:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="SavingScreenshotsTrial"
                           namespace="SavingScreenshotsTrial"
                           default-access="property">
          <class name="Screenshot" lazy="false">
            <id name="Id" type="Int64">
              <generator class="hilo"/>
            </id>
            <property name="Data" column="Data" type="BinaryBlob" length="2147483647" not-null="true" />
          </class>
        </hibernate-mapping>

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="SavingScreenshotsTrial"
                           namespace="SavingScreenshotsTrial">
          <class name="Video" lazy="false" table="Video" discriminator-value="0" abstract="true">
            <id name="Id" type="Int64" access="property">
              <generator class="hilo"/>
            </id>
            <property name="Name" />
            <list name="Screenshots" cascade="all-delete-orphan" lazy="false">
              <key column="VideoId" />
              <index column="SortOrder" />
              <one-to-many class="Screenshot" />
            </list>
          </class>
        </hibernate-mapping>

    When I try to save a Video with 10,000 screenshots, it throws an OutOfMemoryException. Here is the code I'm using:

        using (var session = CreateSession())
        {
            Video video = new Video();
            for (int i = 0; i < 10000; i++)
            {
                video.Screenshots.Add(new Screenshot() { Data = camera.TakeScreenshot(resolution) });
            }
            session.SaveOrUpdate(video);
        }
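    Not a direct answer that preserves the list mapping, but a sketch of the usual escape hatch if nothing else pans out: let Screenshot carry VideoId and SortOrder itself (exactly the trade-off the question is trying to avoid), so each row can be saved and then released from memory in batches. Names follow the sample above; the reshaped Screenshot properties and the batch size of 500 are my assumptions:

        using (var session = CreateSession())
        using (var tx = session.BeginTransaction())
        {
            var video = new Video { Name = "sample" };
            session.Save(video); // assigns the id needed for the foreign key

            for (int i = 0; i < 10000; i++)
            {
                // hypothetical reshaped Screenshot with VideoId/SortOrder properties
                var shot = new Screenshot
                {
                    VideoId = video.Id,
                    SortOrder = i,
                    Data = camera.TakeScreenshot(resolution)
                };
                session.Save(shot);

                if (i % 500 == 0)
                {
                    session.Flush(); // push this batch of INSERTs
                    session.Clear(); // drop the batch (and its byte[]s) from the session cache
                }
            }

            tx.Commit();
        }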

  • WCF: Configuring Known Types

    - by jerbersoft
    I want to know how to configure known types in WCF. For example, I have a Person class and an Employee class; Employee is a subclass of Person, and both classes are marked with a [DataContract] attribute. I don't want to hardcode the known type of a class, like putting [ServiceKnownType(typeof(Employee))] on the Person class, so that WCF will know that Employee is a subclass of Person. Instead, I added the following XML configuration to the host's App.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.runtime.serialization>
            <dataContractSerializer>
              <declaredTypes>
                <add type="Person, WCFWithNoLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
                  <knownType type="Employee, WCFWithNoLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
                </add>
              </declaredTypes>
            </dataContractSerializer>
          </system.runtime.serialization>
          <system.serviceModel>
            .......
          </system.serviceModel>
        </configuration>

    I compiled it, ran the host, added a service reference at the client, added some code and ran the client. But an error occurred:

        The formatter threw an exception while trying to deserialize the message: There was an error while
        trying to deserialize parameter http://www.herbertsabanal.net:person. The InnerException message was
        'Error in line 1 position 247. Element 'http://www.herbertsabanal.net:person' contains data of the
        'http://www.herbertsabanal.net/Data:Employee' data contract. The deserializer has no knowledge of any
        type that maps to this contract. Add the type corresponding to 'Employee' to the list of known types -
        for example, by using the KnownTypeAttribute attribute or by adding it to the list of known types
        passed to DataContractSerializer.'. Please see InnerException for more details.

    Below are the data contracts:

        [DataContract(Namespace = "http://www.herbertsabanal.net/Data", Name = "Person")]
        class Person
        {
            string _name;
            int _age;

            [DataMember(Name = "Name", Order = 0)]
            public string Name
            {
                get { return _name; }
                set { _name = value; }
            }

            [DataMember(Name = "Age", Order = 1)]
            public int Age
            {
                get { return _age; }
                set { _age = value; }
            }
        }

        [DataContract(Namespace = "http://www.herbertsabanal.net/Data", Name = "Employee")]
        class Employee : Person
        {
            string _id;

            [DataMember]
            public string ID
            {
                get { return _id; }
                set { _id = value; }
            }
        }

    Btw, I didn't use class libraries (WCF class libraries or non-WCF class libraries) for my service; I just coded it plainly in the host project. I guess there must be a problem in the config file (please see the config file above), or I must be missing something. Any help would be pretty much appreciated.
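    One thing worth checking (an assumption on my part, since the snippets above don't show a namespace declaration): the type attribute in <declaredTypes> takes CLR assembly-qualified names, not data contract names, so if Person and Employee live inside a .NET namespace, that namespace must appear in the config entry - the [DataContract] namespace plays no part here. A sketch, assuming the classes sit in a WCFWithNoLibrary namespace:

        <declaredTypes>
          <add type="WCFWithNoLibrary.Person, WCFWithNoLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
            <knownType type="WCFWithNoLibrary.Employee, WCFWithNoLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
          </add>
        </declaredTypes>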

    Read the article

  • General website publishing questions involving domain forwarding issue

    - by Gorgeousyousuf
Even though I have a certain level of knowledge and experience in web development, I have never been interested in obtaining a domain and publishing a website from my own server. Today I have been struggling with getting my own domain and configuring it using web resources. I started by learning the outline of the web publishing process, including web server installation, deploying a website for testing purposes, router port forwarding, getting a domain, and forwarding the domain to my router, which will also forward HTTP requests to my web server. I am confused about some parts and so far could not get the website accessed from outside the network. Everything I am trying to do is just for learning purposes, so I am not paying much attention to security issues for now. I have Server 2008 and IIS 7.5 installed. I use a laptop and have access to the modem over wireless, and my modem is a Zoom X6 5590. Well, I will continue by explaining what I have done so far and what I think should happen after each action. I have successfully accessed my website on any local computer by entering the internal IP address and port pair of the host machine in a browser. Next, I forwarded port 80 of my host machine, creating a virtual server like 10.0.0.x (internal static IP of the host) - tcp - start port: 80 - end port: 80 in the router options. Now I suppose every request that comes to the public IP on port 80 will be forwarded to my host machine (10.0.0.x) over port 80. So if everything went as desired, the website listening on port 80 will accept the request, process it, and finally respond. I expected to access my website from outside the network by entering http://MyPublicIp:80 in a browser, but I couldn't accomplish this task, despite using GoDaddy's domain forwarding tool; I do see a small view of my website when I click the "preview" button that checks whether the address (http://publicip/Index.aspx) I entered, where my domain will be forwarded, is available or not. I am sure that configuring the domain does not play a role in this problem, since using the public IP and port matching does not help either. So here is the first question: what is causing this problem? After that, I have a couple of questions regarding domain forwarding using the GoDaddy tool. Can I forward my domain to any port, for example port 8080, other than the default HTTP port 80? Additionally, can I use a sub-domain to forward to a different port on the host? What I want to design is this: if the client enters www.mydomain.com, website1 will respond over a specified port, and when a client enters info.mydomain.com, another website which listens on a different port will respond. I tried to add a sub-domain and forward it to an address like http://www.mydomain.com:8080/Index.aspx with no success. Can I really do that? Finally, what if I have an FTP site listening on the default port 21 and I create a domain like ftp.mydomain.com that forwards to that FTP site address? Is it possible to use sub-domains for FTP site access? I know I am more than confused, but however you reply, you will help me get a clearer view of this subject. Thank you very much.
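    A note that may help frame the sub-domain questions, offered as a hedged sketch rather than a verified fix: DNS only maps names to IP addresses and cannot carry a port number, so the usual way to serve www.mydomain.com and info.mydomain.com from one IIS machine is to keep both sites on port 80 and distinguish them by host header instead of by port. With IIS 7.5 that could look like this (site names and paths are made up for illustration):

        rem Hypothetical appcmd commands, run from %windir%\system32\inetsrv
        appcmd add site /name:"www"  /bindings:http/*:80:www.mydomain.com  /physicalPath:"C:\sites\www"
        appcmd add site /name:"info" /bindings:http/*:80:info.mydomain.com /physicalPath:"C:\sites\info"

    Forwarding a name to a non-default port (info.mydomain.com to :8080) can only be done with an HTTP redirect or a reverse proxy, not with DNS itself. FTP has the same constraint: ftp.mydomain.com can point at the machine, but the client still connects on port 21 unless explicitly told otherwise.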

    Read the article

  • python: what are efficient techniques to deal with deeply nested data in a flexible manner?

    - by AlexandreS
My question is not about a specific code snippet but more general, so please bear with me: How should I organize the data I'm analyzing, and which tools should I use to manage it? I'm using Python and NumPy to analyse data. Because the Python documentation indicates that dictionaries are very optimized, and also due to the fact that the data itself is very structured, I stored it in a deeply nested dictionary. Here is a skeleton of the dictionary: the position in the hierarchy defines the nature of the element, and each new line defines the contents of a key in the preceding level: [AS091209M02] [AS091209M01] [AS090901M06] ... [100113] [100211] [100128] [100121] [R16] [R17] [R03] [R15] [R05] [R04] [R07] ... [1263399103] ... [ImageSize] [FilePath] [Trials] [Depth] [Frames] [Responses] ... [N01] [N04] ... [Sequential] [Randomized] [Ch1] [Ch2] Edit: To explain my data set a bit better: [individual] ex: [AS091209M02] [imaging session (date string)] ex: [100113] [Region imaged] ex: [R16] [timestamp of file] ex [1263399103] [properties of file] ex: [Responses] [regions of interest in image] ex [N01] [format of data] ex [Sequential] [channel of acquisition: this key indexes an array of values] ex [Ch1] The type of operations I perform is, for instance, computing properties of the arrays (listed under Ch1, Ch2), or picking up arrays to make a new collection, for instance analyzing responses of N01 from region 16 (R16) of a given individual at different time points, etc. This structure works well for me and is very fast, as promised. I can analyze the full data set pretty quickly (and the dictionary is far too small to fill up my computer's RAM: half a gig). My problem comes from the cumbersome manner in which I need to program the operations on the dictionary. I often have stretches of code that go like this: for mk in dic.keys(): for rgk in dic[mk].keys(): for nk in dic[mk][rgk].keys(): for ik in dic[mk][rgk][nk].keys(): for ek in dic[mk][rgk][nk][ik].keys(): #do something which is ugly, cumbersome, non-reusable, and brittle (I need to recode it for any variant of the dictionary). I tried using recursive functions, but apart from the simplest applications, I ran into some very nasty bugs and bizarre behaviors that caused a big waste of time (it does not help that I don't manage to debug with pdb in IPython when I'm dealing with deeply nested recursive functions). In the end the only recursive function I use regularly is the following: def dicExplorer(dic, depth = -1, stp = 0): '''prints the hierarchy of a dictionary. if depth not specified, will explore all the dictionary ''' if depth - stp == 0: return try : list_keys = dic.keys() except AttributeError: return stp += 1 for key in list_keys: print '+%s> [\'%s\']' %(stp * '---', key) dicExplorer(dic[key], depth, stp) I know I'm doing this wrong, because my code is long, noodly and non-reusable. I need to either use better techniques to flexibly manipulate the dictionaries, or to put the data in some database format (SQLite?). My problem is that since I'm (badly) self-taught with regard to programming, I lack the practical experience and background knowledge to appreciate the options available. I'm ready to learn new tools (SQL, object-oriented programming), whatever it takes to get the job done, but I am reluctant to invest my time and efforts into something that will be a dead end for my needs. So what are your suggestions to tackle this issue, and be able to code my tools in a briefer, more flexible and reusable manner?
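    One pattern that collapses those nested loops (a sketch, not from the question; iter_leaves is a made-up name): a recursive generator that walks the nesting once and yields each key path together with its leaf value, so any depth of dictionary is handled by a single loop:

        def iter_leaves(d, path=()):
            """Yield (key_path, leaf_value) pairs from an arbitrarily nested dict."""
            for key, value in d.items():
                if isinstance(value, dict):
                    for item in iter_leaves(value, path + (key,)):
                        yield item
                else:
                    yield path + (key,), value

        # The five nested for-loops become a single filtered loop, e.g.:
        # for path, channel_data in iter_leaves(dic):
        #     if 'R16' in path and path[-1] == 'Ch1':
        #         ...  # do something with channel_data

    Because the generator yields lazily, it never builds intermediate lists, and filtering on the path tuple replaces hand-written loops for each dictionary variant.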

    Read the article

  • How do I evaluate my skillset against the current market to see what needs improvement and where my

    - by baijajusav
First of all, this question may be out of bounds for this site. If so, remove it. I say this because this site seems to be a place for more concrete questions that are less subjective in nature. And before I begin, for those of you who just prefer a question and not this sort of dialog, here is my question: How can I assess my current skills as a programmer and decide where and what areas to improve upon? That said, here's what I'm asking/talking about, in essence. The market is always in constant flux. As programmers we're always having to learn new things, update our skills, push ourselves into that next project. There's not a very good litmus test that I know of for us to get an idea of where we stand as programmers. I came across this blog post by Jeff Atwood talking about why programmers can't code. Instinctively (and as the post goes on to state) I rushed through the program in about 4 minutes (most of that time was because I was hand-writing it out). Still, this doesn't really answer the question of where my skills need to be to succeed in today's world. I read blogs, listen to podcasts, try to keep up on the latest things coming out. It has only been in the past couple of months that I made a decision to pick a focus area for my learning, as I can't learn everything and trying to do so is to spread myself too thin. I chose ASP.NET MVC & C#. I plan to stick with Microsoft technologies, not out of some sense of loyalty or stubbornness, but rather because they seem to stream together and have a unifying connection between them. With Windows Phone 7 coming out, it seems that now is the obvious time to pick up WPF and Silverlight as well. Still, if you asked me to code something without IntelliSense and the internet, I probably couldn't get the syntax right. I don't have libraries memorized or know precisely where the classes I use exist within the .NET Framework, namely because I haven't had to pull that knowledge out of the air. In a way, I suppose Visual Studio has insulated me, which isn't a good thing, but, at the same time, I've still been able to be productive. I'm working on my own side project to try and help my learning. In doing so, I'm trying to make use of best practices and third-party frameworks where I can. I'm using AutoMapper and EF 1.0. I know everyone in the .NET community seems to cry foul at the sound of EF 1.0, but I can't say why because I've never used anything else. There's no lazy loading and that has proven rather annoying; however, aside from that, I haven't had that much of an issue. Granted this is probably because I'm not writing tests as I go (which I'm not doing because I don't know how to test EF in tests and don't really have a clue how to write tests for ASP.NET MVC 1.0). I'm also using a custom membership provider; granted, it's a barebones implementation, but I'm using it still. My thinking in all of this is that while I am neglecting a great many important technologies that are in the mainstream, I'll have a working project in the end. I can come back and add those things after I finish. Doing it all at once seems like too much. I know how I work, and I don't think I'd ever get it done that way. I've elected to make this a community wiki as I think this question might fit better there. If a moderator disagrees with that choice or the decision to post this here, then just delete the question. I'm not trying to make undue work for anyone. I'm just a programmer trying to assess where my skills are now and where I should be improving.

    Read the article

• In Ruby, how do I read memory values from an external process?

    - by grg-n-sox
So all I simply want to do is make a Ruby program that reads some values from known memory addresses in another process's virtual memory. Through my research and basic knowledge of hex editing a running process's x86 assembly in memory, I have found the base address and offsets for the values in memory I want. I do not want to change them; I just want to read them. I asked a developer of a memory editor how to approach this independently of language, assuming a Windows platform. He told me the Win32API calls for OpenProcess, CreateProcess, ReadProcessMemory, and WriteProcessMemory were the way to go using either C or C++. I think that the way to go would be just using the Win32API class and mapping two instances of it; one for either OpenProcess or CreateProcess, depending on whether the user already has the process running or not, and another instance mapped to ReadProcessMemory. I probably still need to find the function for getting the list of running processes so I know which running process is the one I want if it is running already. This would take some work to put together, but I figure it wouldn't be too bad to code up. It is just a new area of programming for me, since I have never worked this low level from a high-level language (well, higher level than C anyways). I am just wondering about the ways to approach this. I could just use a bunch of Win32API calls, but that means having to deal with a bunch of string and array packing and unpacking that is system dependent. I want to eventually make this work cross-platform, since the process I am reading from is produced from an executable that has multiple platform builds (I know the memory address changes from system to system. The idea is to have a flat file that contains all memory mappings so the Ruby program can just match the current platform environment to the matching memory mapping.), but from the looks of things I'll just have to make a class that wraps whatever the current platform's memory-related shared-library calls are. For all I know, there could already exist a Ruby gem that takes care of all of this for me that I am just not finding. I could also possibly try editing the executables for each build so that whenever the memory values I want to read are written to by the process, it also writes a copy of the new value to a space in shared memory, have Ruby make an instance of a class that is, under the hood, a pointer to that shared memory address, and somehow signal to the Ruby program that the value was updated and should be reloaded. Basically an interrupt-based system would be nice, but since the purpose of reading these values is just to send them to a scoreboard broadcast from a central server, I could just stick to a polling-based system that sends updates at fixed time intervals. I also could just abandon Ruby altogether and go for C or C++, but I do not know those nearly as well. I actually know more x86 than C++, and I only know C as far as system-independent ANSI C, and I have never dealt with shared system libraries before. So is there a gem or lesser-known module available that has already done this? If not, then any additional information as to how to accomplish this would be nice. I guess, long story short, how do I do all this? Thanks in advance, Grg PS: Also a confirmation that those Win32API calls should be aimed at the kernel32.dll library would be nice.
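    As a starting point, here is a minimal sketch of the Win32API route. Hedged: the pid and address are made-up placeholders, the call is 32-bit only, and error handling is omitted. And yes, OpenProcess and ReadProcessMemory do live in kernel32.dll:

        require 'Win32API'

        PROCESS_VM_READ = 0x0010

        open_process = Win32API.new('kernel32', 'OpenProcess',       %w[L I L],     'L')
        read_memory  = Win32API.new('kernel32', 'ReadProcessMemory', %w[L L P L P], 'I')

        pid     = 1234            # hypothetical: the target process id
        address = 0x00400000      # hypothetical: the address found by hex editing

        handle     = open_process.call(PROCESS_VM_READ, 0, pid)
        buffer     = "\0" * 4     # room for one 32-bit value
        bytes_read = "\0" * 4
        read_memory.call(handle, address, buffer, buffer.size, bytes_read)
        value = buffer.unpack('l').first   # native-endian signed 32-bit int

    The pack/unpack format string is the system-dependent part mentioned above; a cross-platform wrapper class would hide exactly that detail plus the library names.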

    Read the article

  • Why do we need different CPU architecture for server & mini/mainframe & mixed-core?

    - by claws
Hello, I was just wondering what other CPU architectures are available other than Intel & AMD. So, I found the List of CPU architectures on Wikipedia. It categorizes notable CPU architectures into the following categories. Embedded CPU architectures Microcomputer CPU architectures Workstation/Server CPU architectures Mini/Mainframe CPU architectures Mixed core CPU architectures I was analyzing the purposes and have a few doubts. I am taking the microcomputer CPU (PC) architecture as a reference and comparing the others. Embedded CPU architecture: They are a completely new world. Embedded systems are small & do a very specific task, mostly real-time & low power consuming, so we do not need as many & such wide registers as are available in a microcomputer CPU (typical PC). In other words, we need a new, tiny architecture. Hence a new architecture & a new instruction set: RISC. The above point also clarifies why we need a separate operating system (RTOS). Workstation/Server CPU architectures: I don't know what a workstation is. Could someone clarify regarding the workstation? As for the server: it is dedicated to running specific software (server software like httpd, mysql etc.). Even if other processes run, we need to give the server process priority; therefore there is a need for a new scheduling scheme, and thus we need an operating system different from the general-purpose one. If you have any more points for the need of a server OS, please mention them. But I don't get why we need a new CPU architecture. Why can't the microcomputer CPU architecture do the job? Can someone please clarify? Mini/Mainframe CPU architectures: Again, I don't know what these are & what minis or mainframes are used for. I just know they are very big and occupy a complete floor. But I never read about the real-world problems they are trying to solve. If anyone is working on one of these, share your knowledge. Can someone clarify their purpose & why the microcomputer CPU architecture is not suitable for them? Is there a new kind of operating system for these too? Why? Mixed core CPU architectures: Never heard of these. If possible please keep your answer in this format: XYZ CPU architectures Purpose of XYZ Need for a new architecture. Why can't the current microcomputer CPU architecture work? They go up to 3 GHz & have up to 8 cores. Need for a new operating system. Why do we need a new kind of operating system for this kind of architecture?

    Read the article

  • Eclipse (STS), Maven and maven-minify-plugin, can they work together?

    - by CodeReaper
Hi, I am working on a project where I am in charge of HTML, CSS and JavaScript. I found this maven-minify-plugin that seemed to be just what I wanted. Everything is good when I deploy using Maven on the server, but when I am using Eclipse (STS, www.springsource.com/products/sts) to run the project on localhost, no CSS nor JS file is generated by the plugin. Does anyone have experience with this Maven plugin, so they can tell me whether it should be possible to run it on localhost? Does anyone know of another plugin I can use to (combine and) minify JavaScript and CSS files when running on localhost in Eclipse and also when deploying using Maven? Any help appreciated... ----extra information---- I basically just copied in what it said on the plugin webpage, so I have these bits in my pom.xml: .... <build> <plugins> .... <plugin> <groupId>com.samaxes.maven</groupId> <artifactId>maven-minify-plugin</artifactId> <version>1.1</version> <executions> <execution> <id>default-minify</id> <phase>process-resources</phase> <configuration> <cssFiles> .... <param>forms.css</param> <param>jquery.droppy.css</param> <param>jquery.jgrowl.css</param> </cssFiles> <jsFiles> .... <param>jquery.droppy.js</param> <param>jquery.jgrowl.js</param> </jsFiles> <jsFinalFile>script.js</jsFinalFile> <linebreak>-1</linebreak> <nomunge>false</nomunge> <verbose>false</verbose> <preserveAllSemiColons>false</preserveAllSemiColons> <disableOptimizations>false</disableOptimizations> <bufferSize>4096</bufferSize> </configuration> <goals> <goal>minify</goal> </goals> </execution> </executions> </plugin> </plugins> </build> .... Should/Can I bind the plugin to a different phase? I just use mvn clean package and move the snapshot into Tomcat to deploy on the server. I am unsure how to explain how I run the webapp on localhost, but here goes: I have a vanilla Tomcat that I defined as a server in Eclipse, and then I defined that the webapp should always build in that "server".
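    A hedged guess at the cause rather than a verified fix: when STS/WTP publishes the project to the Tomcat defined in Eclipse, it copies resources itself and does not run the Maven lifecycle, so a goal bound to process-resources never fires. One low-tech workaround is to run that phase by hand before publishing, so the minified files already exist in the output folder:

        mvn process-resources

    If that works, the longer-term options would be binding the plugin execution to a phase the IDE build actually triggers, or configuring the Eclipse Maven integration to execute the plugin on incremental builds.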

    Read the article

• Being able to solve Google Code Jam problem sets

    - by JPro
This is not a homework question, but rather my intention to know if this is what it takes to learn programming. I keep logging into TopCoder not to actually participate but to get a basic understanding of how the problems are solved. But I don't understand what the problem is asking or how to translate it into an algorithm that can solve it. Just now I happened to look at the ACM ICPC 2010 World Finals, which is being held in China. The teams were given problem sets and one of them is this: Given at most 100 points on a plane with distinct x-coordinates, find the shortest cycle that passes through each point exactly once, goes from the leftmost point always to the right until it reaches the rightmost point, then goes always to the left until it gets back to the leftmost point. Additionally, two points are given such that the path from left to right contains the first point, and the path from right to left contains the second point. This seems to be a very simple DP: after processing the last k points, and with the first path ending in point a and the second path ending in point b, what is the smallest total length to achieve that? This is O(n^2) states, transitions in O(n). We deal with the two special points by forcing the first path to contain the first one, and the second path to contain the second one. Now I have no idea what I am supposed to solve after reading the problem set. And there's another one from Google Code Jam: Problem In a big, square room there are two point light sources: one is red and the other is green. There are also n circular pillars. Light travels in straight lines and is absorbed by walls and pillars. The pillars therefore cast shadows: they do not let light through. There are places in the room where no light reaches (black), where only one of the two light sources reaches (red or green), and places where both lights reach (yellow). Compute the total area of each of the four colors in the room. Do not include the area of the pillars. Input * One line containing the number of test cases, T. Each test case contains, in order: * One line containing the coordinates x, y of the red light source. * One line containing the coordinates x, y of the green light source. * One line containing the number of pillars n. * n lines describing the pillars. Each contains 3 numbers x, y, r. The pillar is a disk with the center (x, y) and radius r. The room is the square described by 0 <= x, y <= 100. Pillars, room walls and light sources are all disjoint, they do not overlap or touch. Output For each test case, output: Case #X: black area red area green area yellow area Is it required that people who program be able to solve these types of problems? I would appreciate it if anyone could help me interpret the Google Code Jam problem set, as I wish to participate in this year's Code Jam to see if I can do anything or not. Thanks.
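    To make the quoted DP concrete, here is a sketch of the classic bitonic-tour recurrence it alludes to. Hedged: this ignores the two forced special points, assumes the points are pre-sorted by x, and is the textbook O(n^2) formulation rather than anyone's contest code:

        import math

        def bitonic_tour_length(points):
            """points: list of (x, y) pairs, pre-sorted by x, all x distinct."""
            n = len(points)
            if n < 2:
                return 0.0
            d = lambda i, j: math.hypot(points[i][0] - points[j][0],
                                        points[i][1] - points[j][1])
            # b[i][j] (i < j): shortest pair of x-monotone paths that together
            # visit points 0..j, one path ending at i and the other at j.
            b = [[0.0] * n for _ in range(n)]
            b[0][1] = d(0, 1)
            for j in range(2, n):
                for i in range(j - 1):
                    b[i][j] = b[i][j - 1] + d(j - 1, j)   # extend the path ending at j-1
                # point j starts a fresh step: its predecessor is some k < j-1
                b[j - 1][j] = min(b[k][j - 1] + d(k, j) for k in range(j - 1))
            return b[n - 2][n - 1] + d(n - 2, n - 1)      # close the cycle

    The contest version would add two constraints on which of the two paths may absorb each forced point, which changes the transitions but not the O(n^2) state space.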

    Read the article

  • NHibernate IUserType convert nullable DateTime to DB not-null value

    - by barakbbn
I have a legacy DB that stores no-date as 9999-12-31. The column Till_Date is of type DateTime, not-null="true". In the application I want to build a persisted class that represents no-date as null, so I used a nullable DateTime in C# //public DateTime? TillDate {get; set; } I created an IUserType that knows how to convert the entity's null value to the DB's 9999-12-31, but it seems that NHibernate doesn't call NullSafeGet, NullSafeSet on my IUserType when the entity value is null, and reports that a null is used for a not-null column. I tried to bypass it by mapping the column as not-null="false" (changed only the mapping file, not the DB), but it still didn't help; only now it tries to insert the null value into the DB and I get an ADOException. Does anyone know whether NHibernate supports an IUserType that converts null to not-null values? Thanks //Implementation public class NullableDateTimeToNotNullUserType : IUserType { private static readonly DateTime MaxDate = new DateTime(9999, 12, 31); public new bool Equals(object x, object y) { //This didn't work as well if (ReferenceEquals(x, y)) return true; //if(x == null && y == null) return false; if (x == null || y == null) return false; return x.Equals(y); } public int GetHashCode(object x) { return x == null ? 0 : x.GetHashCode(); } public object NullSafeGet(IDataReader rs, string[] names, object owner) { var value = rs.GetDateTime(rs.GetOrdinal(names[0])); return (value == MaxDate) ? (DateTime?)null : value; } public void NullSafeSet(IDbCommand cmd, object value, int index) { var dateValue = (DateTime?)value; var dbValue = (dateValue.HasValue) ? dateValue.Value : MaxDate; ((IDataParameter)cmd.Parameters[index]).Value = dbValue; } public object DeepCopy(object value) { return value; } public object Replace(object original, object target, object owner) { return original; } public object Assemble(object cached, object owner) { return cached; } public object Disassemble(object value) { return value; } public SqlType[] SqlTypes { get { return new[] { NHibernateUtil.DateTime.SqlType }; } } public Type ReturnedType { get { return typeof(DateTime?); } } public bool IsMutable { get { return false; } } } } //Final implementation with fixes; make the column mapping in hbm.xml not-null="false" public class NullableDateTimeToNotNullUserType : IUserType { private static readonly DateTime MaxDate = new DateTime(9999, 12, 31); public new bool Equals(object x, object y) { //This didn't work as well if (ReferenceEquals(x, y)) return true; //if(x == null && y == null) return false; if (x == null || y == null) return false; return x.Equals(y); } public int GetHashCode(object x) { return x == null ? 0 : x.GetHashCode(); } public object NullSafeGet(IDataReader rs, string[] names, object owner) { var value = NHibernateUtil.Date.NullSafeGet(rs, names[0]); return MaxDate.Equals(value) ? default(DateTime?) : value; } public void NullSafeSet(IDbCommand cmd, object value, int index) { var dateValue = (DateTime?)value; var dbValue = (dateValue.HasValue) ? dateValue.Value : MaxDate; NHibernateUtil.Date.NullSafeSet(cmd, dbValue, index); } public object DeepCopy(object value) { return value; } public object Replace(object original, object target, object owner) { return original; } public object Assemble(object cached, object owner) { return cached; } public object Disassemble(object value) { return value; } public SqlType[] SqlTypes { get { return new[] { NHibernateUtil.DateTime.SqlType }; } } public Type ReturnedType { get { return typeof(DateTime?); } } public bool IsMutable { get { return false; } } } }
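    For completeness, a sketch of how such a user type would be wired into the mapping. Hedged: the property, column and assembly names here are made up; the point is the fully-qualified type string and the not-null="false" setting the final version relies on:

        <!-- hypothetical hbm.xml fragment: substitute your namespace and assembly -->
        <property name="TillDate" column="Till_Date" not-null="false"
                  type="MyApp.NullableDateTimeToNotNullUserType, MyApp" />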

    Read the article

  • Can someone explain the "use-cases" for the default munin graphs?

    - by exhuma
When installing munin, it activates a default set of plugins (at least on Ubuntu). Alternatively, you can simply run munin-node-configure to figure out which plugins are supported on your system. Most of these plugins plot straightforward data. My question is not to explain the nature of the data (well... maybe for some) but what is it that you look for in these graphs? It is easy to install munin and see fancy graphs. But having the graphs and not being able to "read" them renders them totally useless. I am going to list the standard plugins which are enabled by default on my system. So it's going to be a long list. For completeness, I am also going to list the plugins which I think I understand and give a short explanation of what I think each is used for. Please correct me if I am wrong about any of them. So let me split this question into three parts: Plugins where I don't even understand the data. Plugins where I understand the data but don't know what I should look out for. Plugins which I think I understand. Plugins where I don't even understand the data: These may contain questions that are not necessarily aimed at munin alone. Not understanding the data usually means a gap in fundamental knowledge of operating systems/hardware.... ;) Feel free to respond with a "giyf" answer. These are plugins where I can only guess what's going on... I hardly want to look at these while just "guessing"... Disk IOs per device (IOs/second): What's an IO? I know it stands for input/output. But that's as far as it goes. Disk latency per device (Average IO wait): Not a clue what an "IO wait" is... IO Service Time: This one is a huge mess, and it's near impossible to see anything in the graph at all. Plugins where I understand the data but don't know what I should look out for: IOStat (blocks/second read/written): I assume the things to look out for here are spikes? Which would mean that the device is in heavy use? Available entropy (bytes): I assume that this is important for random number generation? Why would I graph this? So far the value has always been near constant. VMStat (running/I/O sleep processes): What's the difference between this one and the "processes" graph? Both show running/sleeping processes, whereas the "Processes" graph seems to have more details. Disk throughput per device (bytes/second read/written): What's the difference between this one and the "IOStat" graph? inode table usage: What should I look for in this graph? Plugins which I think I understand: I'll be guessing some things here... correct me if I am wrong. Disk usage in percent (percent): How much disk space is used/remaining. As this approaches 100%, you should consider cleaning up or extending the partition. This is extremely important for the root partition. Firewall Throughput (packets/second): The number of packets passing through the firewall. If this is spiking for a longer period of time, it could be a sign of a DOS attack (or we are simply receiving a large file). It can also give you an idea of your firewall performance. If it's levelling out and you need more "power", you should consider load balancing. If it's levelling out and you see a correlation with your CPU load, it could also mean that your hardware is not fast enough. Correlations with disk usage could point to excessive LOG targets in your FW config. eth0 errors (packets in/out): Network errors. If this value is increasing, it could be a sign of faulty hardware. eth0 traffic (bits/second in/out): Raw network traffic. This should correlate with firewall throughput. number of threads: An ever-increasing value might point to a process not properly closing threads. Investigate! processes: Breakdown of active processes (including sleeping). A quick spike in here might point to a fork-bomb. A slow but ever-increasing value might point to an application spawning sub-processes but not properly closing them. Investigate using ps faux. process priority: This shows the distribution of process priorities. Having only high-priority processes is not of much use. Consider de-prioritizing some. cpu usage: Fairly straightforward. If this is spiking, you may have an attack going on, or a process is hogging the CPU. If it's slowly increasing and approaching max in normal operations, you should consider upgrading your hardware (or load-balancing). file table usage: Number of actively open files. If this is reaching max, you may have a process opening but not properly releasing files. load average: Shows a summarized value for the system load. Should correlate with CPU usage. Increasing values can come from a number of sources. Look for correlations with other graphs. memory usage: A graphical representation of your memory. As long as you have a lot of unused+cache+buffers you are fine. swap in/out: Shows the activity on your swap partition. This should always be 0. If you see activity on this, you should add more memory to your machine!

    Read the article

  • Double Linked List Insertion Sorting Bug

    - by house
Hello, I have implemented an insertion sort on a doubly linked list (highest to lowest) from a file of 10,000 ints, with output to file in reverse order. To my knowledge I have implemented such a program; however, I noticed that in the output file a single number is out of place. Every other number is in correct order. The number out of place is a repeated number, but the other repeats of this number are in correct order. It's just strange how this one number is incorrectly placed. Also, the unsorted number is only 6 places out of sync. I have looked through my program for days now with no idea where the problem lies, so I turn to you for help. Below is the code in question (side note: can I delete my question myself, so that my colleagues don't thieve my code? If not, how can it be deleted?): void DLLIntStorage::insertBefore(int inValue, node *nodeB) { node *newNode; newNode = new node(); newNode->prev = nodeB->prev; newNode->next = nodeB; newNode->value = inValue; if(nodeB->prev==NULL) { this->front = newNode; } else { nodeB->prev->next = newNode; } nodeB->prev = newNode; } void DLLIntStorage::insertAfter(int inValue, node *nodeB) { node *newNode; newNode = new node(); newNode->next = nodeB->next; newNode->prev = nodeB; newNode->value = inValue; if(nodeB->next == NULL) { this->back = newNode; } else { nodeB->next->prev = newNode; } nodeB->next = newNode; } void DLLIntStorage::insertFront(int inValue) { node *newNode; if(this->front == NULL) { newNode = new node(); this->front = newNode; this->back = newNode; newNode->prev = NULL; newNode->next = NULL; newNode->value = inValue; } else { insertBefore(inValue, this->front); } } void DLLIntStorage::insertBack(int inValue) { if(this->back == NULL) { insertFront(inValue); } else { insertAfter(inValue, this->back); } } ifstream& operator>> (ifstream &in, DLLIntStorage &obj) { int readInt, counter = 0; while(!in.eof()) { if(counter==dataLength) //stops at 10,000 { break; } in >> readInt; if(obj.front != NULL ) { obj.insertion(readInt); } else { obj.insertBack(readInt); } counter++; } return in; } void DLLIntStorage::insertion(int inValue) { node* temp; temp = this->front; if(temp->value >= inValue) { insertFront(inValue); return; } else { while(temp->next!=NULL && temp!=this->back) { if(temp->value >= inValue) { insertBefore(inValue, temp); return; } temp = temp->next; } } if(temp == this->back) { insertBack(inValue); } } Thank you for your time.
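    A hedged reading of where the bug may live (an assumption from tracing the code, not a tested fix): insertion() never compares the value stored in the back node. The loop exits as soon as temp reaches this->back, so a value smaller than the current maximum but larger than everything before it gets appended after back instead of before it, which would show up as exactly one misplaced repeat. A simplified loop that compares every node, relying on insertBefore's existing front handling, might look like:

        // Sketch: same class, same insertBefore/insertBack helpers as above.
        void DLLIntStorage::insertion(int inValue)
        {
            node* temp = this->front;
            while (temp != NULL)            // also visits this->back
            {
                if (temp->value >= inValue)
                {
                    insertBefore(inValue, temp);  // covers temp == front too
                    return;
                }
                temp = temp->next;
            }
            insertBack(inValue);            // inValue exceeds every stored value
        }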

    Read the article

< Previous Page | 165 166 167 168 169 170 171 172 173 174 175 176  | Next Page >