Search Results

Search found 11759 results on 471 pages for 'isolation level'.


  • Games development with a game loop that's abstracted away

    - by Davy8
    Most game development happens with a main game loop. Are there any good articles/blog posts/discussions about games without a game loop? I imagine they'd mostly be web games, but I'd be interested in hearing otherwise. (As a side note, I think it's really interesting that the concept is almost exclusively used in gaming as far as I'm aware; perhaps that could be another question.)

    Edit: I realize there's probably a redraw loop somewhere. I guess what I really mean is a loop that is hidden from you. Frames are something you as the developer are not concerned with, because you're working at a higher level of abstraction. E.g. someLootItem.moveTo(inventory, someAnimationType) would move the item from the loot box to your inventory using the specified animation type, without the game developer having to worry about the implementation details of that animation. Maybe that's how "real" games end up working, but most tutorials seem to imply that a much more granular level of control is used; that might just be an artifact of being a tutorial, though.

    Edit2: I think most people are misunderstanding what I'm trying to ask, likely because I'm having trouble describing exactly what I'm trying to ask. After some more thinking, perhaps what I'm referring to is more along the lines of what I believe is called "scripting", where you work at a very high level and some game engine takes care of the low-level details. For example, take custom maps in Starcraft II or Warcraft III. Many of the "maps" have gameplay that deviates enough from the primary game that they could be considered separate games written on the same engine. That is what I'm referring to. I may be wrong because I only dabbled in the Warcraft III editor, but as far as I remember, nowhere in the map editor do you control the game loop, and yet you can create many different games out of it. In my mind, these are games in their own right. If you're playing DotA you don't say you're playing Warcraft III; you say you're playing DotA, because that's the actual game you're playing. Such a system may impose limitations that don't exist if you're creating a game from scratch, but it greatly reduces development time because much of the "hard" work has already been done for you. Hopefully that clarifies what I'm asking.

    Another example of what I mean: when you write a web app, it of course communicates through sockets and TCP, but the average web developer doesn't explicitly write code for connecting sockets; they just need to know about receiving a request and sending a response. There are scenarios where you do occasionally need raw sockets, but that's generally rare in web development. In a similar fashion, it's very possible to write a game without directly using the game loop, even though one is used behind the scenes. Probably not an AAA title, but there must be hundreds of smaller-scale games that can be, and possibly are, written this way. Are there any good resources on writing these "simpler" games?
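
    As a concrete illustration of the kind of API the question describes, here is a minimal sketch (the Engine, Sprite, MoveTo and Run names are invented for illustration, not taken from any real engine): the script states an intent, and the loop that advances animations stays hidden inside the engine.

        using System;
        using System.Collections.Generic;

        // A toy engine whose game loop is private: scripts call MoveTo and
        // never deal with frames or delta times themselves.
        class Engine
        {
            private readonly List<Func<double, bool>> animations = new List<Func<double, bool>>();

            public void MoveTo(Sprite item, double targetX, double targetY, double seconds)
            {
                double startX = item.X, startY = item.Y, elapsed = 0;
                animations.Add(dt =>
                {
                    elapsed = Math.Min(elapsed + dt, seconds);
                    double t = elapsed / seconds;
                    item.X = startX + (targetX - startX) * t;
                    item.Y = startY + (targetY - startY) * t;
                    return elapsed < seconds; // false means the animation is done
                });
            }

            // The hidden loop; a real engine would drive this from a timer or vsync.
            public void Run(int frames, double dt)
            {
                for (int i = 0; i < frames; i++)
                    animations.RemoveAll(step => !step(dt));
            }
        }

        class Sprite { public double X, Y; }

        class Demo
        {
            static void Main()
            {
                var engine = new Engine();
                var loot = new Sprite { X = 0, Y = 0 };
                engine.MoveTo(loot, 100, 50, 1.0); // script-level intent
                engine.Run(60, 1.0 / 60);          // engine-level detail
                Console.WriteLine("(" + loot.X + ", " + loot.Y + ")"); // (100, 50)
            }
        }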

    Read the article

  • Identity in .NET 4.5 – Part 2: Claims Transformation in ASP.NET (Beta 1)

    - by Your DisplayName here!
    In my last post I described how every identity in .NET 4.5 is now claims-based. If you are coming from WIF you might think, great – how do I transform those claims? Sidebar: What is claims transformation? One of the most essential features of WIF (and .NET 4.5) is the ability to transform credentials (or tokens) to claims. During that process the “low level” token details are turned into claims. An example would be a Windows token – it contains things like the name of the user and which groups he belongs to. That information will be surfaced as claims of type Name and GroupSid. Forms users will be represented as a Name claim (all the other claims that WIF provided for FormsIdentity are gone in 4.5). The issue here is that your applications typically don’t care about those low level details, but rather about “what’s the purchase limit of alice”. The process of turning the low level claims into application specific ones is called claims transformation. In pre-claims times this would have been done by a combination of Forms Authentication extensibility, role manager and maybe ASP.NET profile. With claims transformation, all your identity gathering code is in one place (and the outcome can be cached in a single place, as opposed to multiple ones).

    The class for doing claims transformation is called ClaimsAuthenticationManager. This class has two purposes – first, looking at the incoming (low level) principal and making sure all required information about the user is present. This is your first chance to reject a request. And second – modeling identity information in a way that is relevant for the application (see also here). This class gets called (when present) during the pipeline when using WS-Federation – but not when using the standard .NET principals. I am not sure why – maybe because it is beta 1. Anyhow, a number of people asked me about it, and the following is a little HTTP module that brings that feature back in 4.5.

        public class ClaimsTransformationHttpModule : IHttpModule
        {
            public void Dispose()
            { }

            public void Init(HttpApplication context)
            {
                context.PostAuthenticateRequest += Context_PostAuthenticateRequest;
            }

            void Context_PostAuthenticateRequest(object sender, EventArgs e)
            {
                var context = ((HttpApplication)sender).Context;

                // no need to call transformation if session already exists
                if (FederatedAuthentication.SessionAuthenticationModule != null &&
                    FederatedAuthentication.SessionAuthenticationModule.ContainsSessionTokenCookie(context.Request.Cookies))
                {
                    return;
                }

                var transformer = FederatedAuthentication.FederationConfiguration.IdentityConfiguration.ClaimsAuthenticationManager;
                if (transformer != null)
                {
                    var transformedPrincipal = transformer.Authenticate(context.Request.RawUrl, context.User as ClaimsPrincipal);

                    context.User = transformedPrincipal;
                    Thread.CurrentPrincipal = transformedPrincipal;
                }
            }
        }

    HTH
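
    For readers wondering what the transformer itself looks like, here is a minimal sketch of a custom ClaimsAuthenticationManager (the base class and its Authenticate override are part of .NET 4.5; the purchase-limit claim type and the lookup are invented for illustration):

        using System.Security.Claims;

        public class MyClaimsTransformer : ClaimsAuthenticationManager
        {
            public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
            {
                // First chance to reject: no authenticated identity, no entry.
                if (incomingPrincipal == null || !incomingPrincipal.Identity.IsAuthenticated)
                {
                    return base.Authenticate(resourceName, incomingPrincipal);
                }

                // Turn low level claims into application specific ones.
                var name = incomingPrincipal.Identity.Name;
                var claims = new[]
                {
                    new Claim(ClaimTypes.Name, name),
                    // made-up application claim type
                    new Claim("http://myapp/claims/purchaselimit", LookupPurchaseLimit(name))
                };

                return new ClaimsPrincipal(new ClaimsIdentity(claims, "Custom"));
            }

            private string LookupPurchaseLimit(string name)
            {
                return "10000"; // hypothetical store lookup, hard-coded here
            }
        }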

    Read the article

  • Get More From Your Service Request

    - by Get Proactive Customer Adoption Team
    Leveraging Service Request Best Practices

    Use best practices to get there faster. In the daily conversations I have with customers, they sometimes express frustration over their Service Requests. They often feel powerless to make needed changes, so their sense of frustration grows. To help you avoid some of the frustration you might feel in dealing with your Service Requests (SRs), here are a few pointers that come from our best practice discussions.

    Be proactive. If you can anticipate some of the questions that Support will ask, or the information they may need, try to provide this up front, when you log the SR. This could be output from the Remote Diagnostic Agent (RDA), if this is a database issue, or the output from another diagnostic tool, if you’re an EBS customer. Any information you can supply that helps us understand the situation better helps us resolve the issue sooner. As you use some of these tools proactively, you might even find the solution to the problem before you log an SR!

    Be right. Make sure you have the correct severity level. Since you select the initial severity level, it’s easy to accept the default without considering how significant this may be. Business impact is the driving factor, so make sure you take a moment to select the severity level that is appropriate to the situation. Also, make sure you ask us to change the severity level, should the situation dictate.

    Be responsive! If this is an important issue to you, quickly follow up on any action plan submitted to you by Oracle Support. The support engineer assigned to your Service Request will be able to move the issue forward more aggressively when they have the needed information. This is crucial in resolving your issues in a timely manner.

    Be thorough. If there are five questions in the action plan, make sure you provide an answer for all five questions in one response, rather than trickling them in one at a time. This will allow the engineer to look at all of the information as a whole and to avoid multiple trips to your SR, saving valuable time and getting you a resolution sooner.

    Be your own advocate! You know your situation best; make sure Oracle Support understands both how and why this issue is important to you and your company. Use the escalation process if you're concerned that your SR isn't going in the right direction, at the right pace, or through the right person. Don't wait until you're frustrated and angry. An escalation is as simple as a quick conversation on the phone and can be amazingly effective in getting your issues back on track. The support manager you speak with is empowered to make any needed changes.

    Be our partner. You can make your support experience better. When your SR has been resolved, you may receive a survey request. This is intended to get your feedback about how your SR went and what we can do to improve your overall support experience.

    Oracle Support is here to help you. Our goal with any Service Request is to provide the best possible solution as quickly as possible. With your help, we’ll be able to do this with your Service Request too.

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn’t take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and often it is really important that a DBA understands not just the basics of how to perform a task, but why we do the task, and how that task works.

    As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if the light fails to shine forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car and take them to the local home store and instruct them to buy a replacement. Confronted with a 40 foot display of light bulbs, how will they decide which of the hundreds of bulbs, of different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn that day is how to use their cell phone as a flashlight, so they don’t have to ask for help the next time.

    Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own “40 foot display of light bulbs”, in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Without depth, we end up resorting to guesswork, trying different “bulbs” over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way.

    If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that “works”, but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the “key lookup” operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but it is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query.

    My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it’s essential that I know that choosing an Integer over Decimal(8,0) will likely offer performance benefits. It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values stay within the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!
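
    As a concrete illustration of that integer example (a sketch; the table and column names are invented): int stores the value in 4 bytes with native CPU arithmetic, while decimal(8,0) takes 5 bytes and software-emulated math, and the CHECK constraint captures the business range.

        CREATE TABLE dbo.Measurement
        (
            MeasurementId int IDENTITY(1,1) PRIMARY KEY,
            Reading       int NOT NULL,
            -- enforce the 0..99999999 range the application expects
            CONSTRAINT CK_Measurement_Reading
                CHECK (Reading BETWEEN 0 AND 99999999)
        );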

    Read the article

  • Cisco ASA: How to route PPPoE-assigned subnet?

    - by Martijn Heemels
    We've just received a fiber uplink, and I'm trying to configure our Cisco ASA 5505 to properly use it. The provider requires us to connect via PPPoE, and I managed to configure the ASA as a PPPoE client and establish a connection. The ASA is assigned an IP address by PPPoE, and I can ping out from the ASA to the internet, but I should have access to an entire /28 subnet. I can't figure out how to get that subnet configured on the ASA, so that I can route or NAT the available public addresses to various internal hosts. My assigned range is: 188.xx.xx.176/28. The address I get via PPPoE is 188.xx.xx.177/32, which according to our provider is our Default Gateway address. They claim the subnet is correctly routed to us on their side. How does the ASA know which range it is responsible for on the Fiber interface? How do I use the addresses from my range?

    To clarify my config: the ASA is currently configured to default-route to our ADSL uplink on port Ethernet0/0 (interface vlan2, nicknamed Outside). The fiber is connected to port Ethernet0/2 (interface vlan50, nicknamed Fiber) so I can configure and test it before making it the default route. Once I'm clear on how to set it all up, I'll fully replace the Outside interface with Fiber. My config (rather long):

        : Saved
        : ASA Version 8.3(2)4
        !
        hostname gw
        domain-name example.com
        enable password ****** encrypted
        passwd ****** encrypted
        names
        name 10.10.1.0 Inside-dhcp-network description Desktops and clients that receive their IP via DHCP
        name 10.10.0.208 svn.example.com description Subversion server
        name 10.10.0.205 marvin.example.com description LAMP development server
        name 10.10.0.206 dns.example.com description DNS, DHCP, NTP
        !
        interface Vlan2
         description Old ADSL WAN connection
         nameif outside
         security-level 0
         ip address 192.168.1.2 255.255.255.252
        !
        interface Vlan10
         description LAN vlan 10 Regular LAN traffic
         nameif inside
         security-level 100
         ip address 10.10.0.254 255.255.0.0
        !
        interface Vlan11
         description LAN vlan 11 Lab/test traffic
         nameif lab
         security-level 90
         ip address 10.11.0.254 255.255.0.0
        !
        interface Vlan20
         description LAN vlan 20 ISCSI traffic
         nameif iscsi
         security-level 100
         ip address 10.20.0.254 255.255.0.0
        !
        interface Vlan30
         description LAN vlan 30 DMZ traffic
         nameif dmz
         security-level 50
         ip address 10.30.0.254 255.255.0.0
        !
        interface Vlan40
         description LAN vlan 40 Guests access to the internet
         nameif guests
         security-level 50
         ip address 10.40.0.254 255.255.0.0
        !
        interface Vlan50
         description New WAN Corporate Internet over fiber
         nameif fiber
         security-level 0
         pppoe client vpdn group KPN
         ip address pppoe
        !
        interface Ethernet0/0
         switchport access vlan 2
         speed 100
         duplex full
        !
        interface Ethernet0/1
         switchport trunk allowed vlan 10,11,30,40
         switchport trunk native vlan 10
         switchport mode trunk
        !
        interface Ethernet0/2
         switchport access vlan 50
         speed 100
         duplex full
        !
        interface Ethernet0/3
         shutdown
        !
        interface Ethernet0/4
         shutdown
        !
        interface Ethernet0/5
         switchport access vlan 20
        !
        interface Ethernet0/6
         shutdown
        !
        interface Ethernet0/7
         shutdown
        !
        boot system disk0:/asa832-4-k8.bin
        ftp mode passive
        clock timezone CEST 1
        clock summer-time CEDT recurring last Sun Mar 2:00 last Sun Oct 3:00
        dns domain-lookup inside
        dns server-group DefaultDNS
         name-server dns.example.com
         domain-name example.com
        same-security-traffic permit inter-interface
        same-security-traffic permit intra-interface
        object network inside-net
         subnet 10.10.0.0 255.255.0.0
        object network svn.example.com
         host 10.10.0.208
        object network marvin.example.com
         host 10.10.0.205
        object network lab-net
         subnet 10.11.0.0 255.255.0.0
        object network dmz-net
         subnet 10.30.0.0 255.255.0.0
        object network guests-net
         subnet 10.40.0.0 255.255.0.0
        object network dhcp-subnet
         subnet 10.10.1.0 255.255.255.0
         description DHCP assigned addresses on Vlan 10
        object network Inside-vpnpool
         description Pool of assignable addresses for VPN clients
        object network vpn-subnet
         subnet 10.10.3.0 255.255.255.0
         description Address pool assignable to VPN clients
        object network dns.example.com
         host 10.10.0.206
         description DNS, DHCP, NTP
        object-group service iscsi tcp
         description iscsi storage traffic
         port-object eq 3260
        access-list outside_access_in remark Allow access from outside to HTTP on svn.
        access-list outside_access_in extended permit tcp any object svn.example.com eq www
        access-list Insiders!_splitTunnelAcl standard permit 10.10.0.0 255.255.0.0
        access-list iscsi_access_in remark Prevent disruption of iscsi traffic from outside the iscsi vlan.
        access-list iscsi_access_in extended deny tcp any interface iscsi object-group iscsi log warnings
        !
        snmp-map DenyV1
         deny version 1
        !
        pager lines 24
        logging enable
        logging timestamp
        logging asdm-buffer-size 512
        logging monitor warnings
        logging buffered warnings
        logging history critical
        logging asdm errors
        logging flash-bufferwrap
        logging flash-minimum-free 4000
        logging flash-maximum-allocation 2000
        mtu outside 1500
        mtu inside 1500
        mtu lab 1500
        mtu iscsi 9000
        mtu dmz 1500
        mtu guests 1500
        mtu fiber 1492
        ip local pool DHCP_VPN 10.10.3.1-10.10.3.20 mask 255.255.0.0
        ip verify reverse-path interface outside
        no failover
        icmp unreachable rate-limit 10 burst-size 5
        asdm image disk0:/asdm-635.bin
        asdm history enable
        arp timeout 14400
        nat (inside,outside) source static any any destination static vpn-subnet vpn-subnet
        !
        object network inside-net
         nat (inside,outside) dynamic interface
        object network svn.example.com
         nat (inside,outside) static interface service tcp www www
        object network lab-net
         nat (lab,outside) dynamic interface
        object network dmz-net
         nat (dmz,outside) dynamic interface
        object network guests-net
         nat (guests,outside) dynamic interface
        access-group outside_access_in in interface outside
        access-group iscsi_access_in in interface iscsi
        route outside 0.0.0.0 0.0.0.0 192.168.1.1 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        dynamic-access-policy-record DfltAccessPolicy
        aaa-server SBS2003 protocol radius
        aaa-server SBS2003 (inside) host 10.10.0.204
         timeout 5
         key *****
        aaa authentication enable console SBS2003 LOCAL
        aaa authentication ssh console SBS2003 LOCAL
        aaa authentication telnet console SBS2003 LOCAL
        http server enable
        http 10.10.0.0 255.255.0.0 inside
        snmp-server host inside 10.10.0.207 community ***** version 2c
        snmp-server location Server room
        snmp-server contact [email protected]
        snmp-server community *****
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        snmp-server enable traps syslog
        crypto ipsec transform-set TRANS_ESP_AES-256_SHA esp-aes-256 esp-sha-hmac
        crypto ipsec transform-set TRANS_ESP_AES-256_SHA mode transport
        crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
        crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
        crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
        crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
        crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto dynamic-map outside_dyn_map 20 set pfs group5
        crypto dynamic-map outside_dyn_map 20 set transform-set TRANS_ESP_AES-256_SHA
        crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5
        crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 1
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        telnet 10.10.0.0 255.255.0.0 inside
        telnet timeout 5
        ssh scopy enable
        ssh 10.10.0.0 255.255.0.0 inside
        ssh timeout 5
        ssh version 2
        console timeout 30
        management-access inside
        vpdn group KPN request dialout pppoe
        vpdn group KPN localname INSIDERS
        vpdn group KPN ppp authentication pap
        vpdn username INSIDERS password ***** store-local
        dhcpd address 10.40.1.0-10.40.1.100 guests
        dhcpd dns 8.8.8.8 8.8.4.4 interface guests
        dhcpd update dns interface guests
        dhcpd enable guests
        !
        threat-detection basic-threat
        threat-detection scanning-threat
        threat-detection statistics host number-of-rate 2
        threat-detection statistics port number-of-rate 3
        threat-detection statistics protocol number-of-rate 3
        threat-detection statistics access-list
        threat-detection statistics tcp-intercept rate-interval 30 burst-rate 400 average-rate 200
        ntp server dns.example.com source inside prefer
        webvpn
        group-policy DfltGrpPolicy attributes
         vpn-tunnel-protocol IPSec l2tp-ipsec
        group-policy Insiders! internal
        group-policy Insiders! attributes
         wins-server value 10.10.0.205
         dns-server value 10.10.0.206
         vpn-tunnel-protocol IPSec l2tp-ipsec
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value Insiders!_splitTunnelAcl
         default-domain value example.com
        username martijn password ****** encrypted privilege 15
        username marcel password ****** encrypted privilege 15
        tunnel-group DefaultRAGroup ipsec-attributes
         pre-shared-key *****
        tunnel-group Insiders! type remote-access
        tunnel-group Insiders! general-attributes
         address-pool DHCP_VPN
         authentication-server-group SBS2003 LOCAL
         default-group-policy Insiders!
        tunnel-group Insiders! ipsec-attributes
         pre-shared-key *****
        !
        class-map global-class
         match default-inspection-traffic
        class-map type inspect http match-all asdm_medium_security_methods
         match not request method head
         match not request method post
         match not request method get
        !
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum 512
        policy-map type inspect http http_inspection_policy
         parameters
          protocol-violation action drop-connection
        policy-map global-policy
         class global-class
          inspect dns
          inspect esmtp
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect http
          inspect icmp
          inspect icmp error
          inspect mgcp
          inspect netbios
          inspect pptp
          inspect rtsp
          inspect snmp DenyV1
        !
        service-policy global-policy global
        smtp-server 123.123.123.123
        prompt hostname context
        call-home
         profile CiscoTAC-1
          no active
          destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService
          destination address email [email protected]
          destination transport-method http
          subscribe-to-alert-group diagnostic
          subscribe-to-alert-group environment
          subscribe-to-alert-group inventory periodic monthly
          subscribe-to-alert-group configuration periodic monthly
          subscribe-to-alert-group telemetry periodic daily
        hpm topN enable
        Cryptochecksum:a76bbcf8b19019771c6d3eeecb95c1ca
        : end
        asdm image disk0:/asdm-635.bin
        asdm location svn.example.com 255.255.255.255 inside
        asdm location marvin.example.com 255.255.255.255 inside
        asdm location dns.example.com 255.255.255.255 inside
        asdm history enable
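
    Not an answer recorded in the thread, but for orientation: on ASA 8.3, addresses from a block the provider routes to you are typically consumed with object NAT on the fiber interface, mirroring the object NAT syntax already used in the config above. A hedged sketch (svn-fiber is an invented object name, 188.xx.xx.178 a placeholder from the redacted /28; no route statement is needed on the ASA for addresses the provider already routes to it):

        object network svn-fiber
         host 10.10.0.208
         nat (inside,fiber) static 188.xx.xx.178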

    Read the article

  • log file is not getting created using JDK logging with Commons-logging

    - by Saida Dhanavath
    When I run the TestJCLLoggingService class, log messages come to the console, but no log file is created. Please help me if you know the answer. The two source files are pasted below.

    TestJCLLoggingService.java:

        package com.amadeus.psp.pasd.logging;

        import org.apache.commons.logging.Log;
        import org.apache.commons.logging.LogFactory;
        import org.springframework.stereotype.Service;

        @Service
        public class TestJCLLoggingService {

            private static Log psp_log = LogFactory.getLog(TestJCLLoggingService.class);

            public static String testJCLLoggingServiceMethod() {
                psp_log.info("start of method testJCLLoggingServiceMethod class TestJCLLoggingService");
                psp_log.info("start of method testJCLLoggingServiceMethod class TestJCLLoggingService");
                return "This is a test string for JCLLogging";
            }

            public static void main(String[] args) {
                testJCLLoggingServiceMethod();
            }
        }

    logging.properties:

        handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler
        .level = ALL
        com.amadeus.psp.pasd.level = ALL
        java.util.logging.ConsoleHandler.level = ALL
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
        java.util.logging.FileHandler.pattern = %h/java%u.log
        java.util.logging.FileHandler.level = ALL
        java.util.logging.FileHandler.limit = 50000
        java.util.logging.FileHandler.count = 1
        java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
        java.util.logging.FileHandler.append = true

    Thanks in advance.
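
    One common gotcha, offered as a hedged guess since the thread records no answer: java.util.logging only reads a custom logging.properties if it is told where the file is, either with -Djava.util.logging.config.file=/path/to/logging.properties on the command line or programmatically; otherwise the JRE's default configuration (console only) stays in effect and the FileHandler section above is never seen. A minimal sketch of the programmatic route:

        import java.io.FileInputStream;
        import java.util.logging.LogManager;

        public class LoggingBootstrap {
            public static void init() throws Exception {
                // Point the JDK LogManager at the custom properties file.
                FileInputStream in = new FileInputStream("logging.properties");
                try {
                    LogManager.getLogManager().readConfiguration(in);
                } finally {
                    in.close();
                }
            }
        }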

    Read the article

  • OAuthWebSecurity.VerifyAuthentication IsSuccessful returns false; how do I determine the reason?

    - by Eonasdan
    I'm using DotNetOpenAuth with an MVC 4 application. All of a sudden Google auth is failing (MS is working). The stock code does this:

        [AllowAnonymous]
        public ActionResult ExternalLoginCallback(string returnUrl)
        {
            var result = OAuthWebSecurity.VerifyAuthentication(Url.Action("ExternalLoginCallback", new { ReturnUrl = returnUrl }));
            if (!result.IsSuccessful)
            {
                return RedirectToAction("ExternalLoginFailure");
            }

    I know that result.IsSuccessful is false, but how do I get the reason? result.Error is null. I also looked at this page to use log4net. I do get a log on the local dev box, but not when I deploy it to a remote server. log4net web.config:

        <log4net>
          <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
            <file value="RelyingParty.log" />
            <appendToFile value="true" />
            <rollingStyle value="Size" />
            <maxSizeRollBackups value="10" />
            <maximumFileSize value="100KB" />
            <staticLogFileName value="true" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date (GMT%date{%z}) [%thread] %-5level %logger - %message%newline" />
            </layout>
          </appender>
          <!-- Setup the root category, add the appenders and set the default level -->
          <root>
            <level value="INFO" />
            <appender-ref ref="RollingFileAppender" />
          </root>
          <!-- Specify the level for some specific categories -->
          <logger name="DotNetOpenAuth">
            <level value="ALL" />
          </logger>
        </log4net>

    Edit: I also tried pointing log4net at a SQL DB, but it still didn't log anything.
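
    A hedged guess at the missing piece, since the thread records no answer: log4net stays silent until it is configured at startup, and a setup that works on the dev box can fail remotely if configuration or file permissions differ. The standard activation (both forms are stock log4net API) looks like this:

        // In AssemblyInfo.cs: read the <log4net> section from Web.config
        // and watch it for changes.
        [assembly: log4net.Config.XmlConfigurator(Watch = true)]

        // Or imperatively, e.g. in Application_Start:
        // log4net.Config.XmlConfigurator.Configure();

    Also worth checking on the remote server: the relative path RelyingParty.log resolves against the worker process's directory, which the application pool identity may not be allowed to write to.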

    Read the article

  • How to configure replication? - This database is not enabled for publication.

    - by truthseeker
    Hi, I'm trying to configure replication on SQL Server 2005. I can do it using the wizard, but when I try to run the scripts generated by the wizard, these error messages appear:

        Msg 14013, Level 16, State 1, Procedure sp_MSrepl_addpublication, Line 159
        This database is not enabled for publication.
        Msg 18757, Level 16, State 1, Procedure sp_MSrepl_addpublication_snapshot, Line 66
        Unable to execute procedure. The database is not published. Execute the procedure in a database that is published for replication.
        Msg 14013, Level 16, State 1, Procedure sp_MSrepl_addarticle, Line 168
        This database is not enabled for publication.
        Msg 14294, Level 16, State 1, Procedure sp_verify_job_identifiers, Line 25
        Supply either @job_id or @job_name to identify the job.

    It's a bit strange, because when I run this query on a database where I clicked and then removed publication, everything goes well. The problem is when I'm using my query on a new database. What is more, I'm using the sp_replicationdboption stored procedure. When I try to run it, it says:

        The replication option 'publish' of database 'ReplicationTest00' has already been set to true.

    Please help me resolve this issue.
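
    For reference, the enable-for-publication call the question mentions has this shape (a sketch using the database name from the question; when scripting a new database, it has to run successfully before the sp_MSrepl_addpublication step):

        USE master;
        GO
        -- Mark the database as enabled for publishing.
        EXEC sp_replicationdboption
            @dbname = N'ReplicationTest00',
            @optname = N'publish',
            @value = N'true';
        GO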

    Read the article

  • TDD - testing business rules/validation in ASP.NET MVC

    - by csetzkorn
    Hi, I am using Sharp Architecture, so I can easily use mocks etc. in my unit tests and/or during TDD. I have quite complicated business rules and would like to test them at the controller level. I am just wondering how other people do this? For me, validation tests business rules at three levels: (1) the property level (e.g. a property is required); (2) the intra-property level (e.g. start date < end date); (3) the persistence level (e.g. name is unique, parent cannot be child of child). My validation framework also assigns errors to properties. I am just wondering what other people do. Do you write a test for each business rule and check whether the correct error message is assigned to the correct property (i.e. looking at the ASP.NET MVC ModelState)? I hope my question makes sense. Thanks a lot! Best wishes, Christian
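
    A sketch of the kind of controller-level test described above (EventController and EventInput are invented names; the ModelState indexer and its Errors collection are standard ASP.NET MVC):

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class EventControllerTests
        {
            [TestMethod]
            public void Create_RejectsStartDateAfterEndDate()
            {
                var controller = new EventController(/* mocked repositories */);
                var model = new EventInput
                {
                    Start = new DateTime(2010, 2, 1),
                    End = new DateTime(2010, 1, 1)
                };

                controller.Create(model);

                // One test per business rule: the right message on the right property.
                Assert.IsFalse(controller.ModelState.IsValid);
                Assert.AreEqual("Start date must be before end date",
                    controller.ModelState["Start"].Errors[0].ErrorMessage);
            }
        }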

    Read the article

  • log4net: Creating a logger dynamically, should I worry about anything?

    - by vtortola
    Hi, I need to create loggers dynamically, so with a post from here and the help of Reflector I have managed to create loggers dynamically, but I'd like to know if I should worry about anything else... I don't know which implications doing this can have.

        public static ILog GetDyamicLogger(Guid applicationId)
        {
            Hierarchy hierarchy = (Hierarchy)LogManager.GetRepository();

            RollingFileAppender roller = new RollingFileAppender();
            roller.LockingModel = new log4net.Appender.FileAppender.MinimalLock();
            roller.AppendToFile = true;
            roller.RollingStyle = RollingFileAppender.RollingMode.Composite;
            roller.MaxSizeRollBackups = 14;
            roller.MaximumFileSize = "15000KB";
            roller.DatePattern = "yyyyMMdd";
            roller.Layout = new log4net.Layout.PatternLayout();
            roller.File = "App_Data\\Logs\\" + applicationId.ToString() + "\\debug.log";
            roller.StaticLogFileName = true;

            PatternLayout patternLayout = new PatternLayout();
            patternLayout.ConversionPattern = "%date [%thread] %-5level %logger [%property{NDC}] - %message%newline";
            patternLayout.ActivateOptions();
            roller.Layout = patternLayout;
            roller.ActivateOptions();

            hierarchy.Root.AddAppender(roller);
            hierarchy.Root.Level = Level.All;
            hierarchy.Configured = true;

            DummyLogger dummyILogger = new DummyLogger(applicationId.ToString());
            dummyILogger.Hierarchy = hierarchy;
            dummyILogger.Level = log4net.Core.Level.All;
            dummyILogger.AddAppender(roller);

            return new LogImpl(dummyILogger);
        }

        internal sealed class DummyLogger : Logger
        {
            // Methods
            internal DummyLogger(string name) : base(name)
            {
            }
        }

    Cheers.
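
    One thing that does seem worth worrying about, offered as an observation rather than an answer from the thread: every call adds another appender to the shared root hierarchy, so calling GetDyamicLogger twice for the same Guid will duplicate output. A small cache avoids that (a sketch, assuming the GetDyamicLogger method above is reachable from here):

        using System;
        using System.Collections.Concurrent;
        using log4net;

        static class DynamicLoggers
        {
            private static readonly ConcurrentDictionary<Guid, ILog> cache =
                new ConcurrentDictionary<Guid, ILog>();

            // Materialize each application's logger exactly once.
            public static ILog For(Guid applicationId)
            {
                return cache.GetOrAdd(applicationId, id => GetDyamicLogger(id));
            }
        }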

    Read the article

  • Deploying an MVC application on a shared hosting server

    - by ankita-13-3
    We have created an MVC web application in ASP.NET 3.5. It runs absolutely fine locally, but when we deploy it on the GoDaddy hosting server (shared hosting), it shows an error related to the trust level. We contacted GoDaddy support and they say that they only support medium trust level applications. So how do I convert my application to a medium trust level application? Do I need to make changes to the web.config file? It shows the following error:

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.

        Exception Details: System.Security.SecurityException: Request failed.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [SecurityException: Request failed.]
        System.Security.CodeAccessSecurityEngine.ThrowSecurityException(Assembly asm, PermissionSet granted, PermissionSet refused, RuntimeMethodHandle rmh, SecurityAction action, Object demand, IPermission permThatFailed) +150
        System.Security.CodeAccessSecurityEngine.ThrowSecurityException(Object assemblyOrString, PermissionSet granted, PermissionSet refused, RuntimeMethodHandle rmh, SecurityAction action, Object demand, IPermission permThatFailed) +100
        System.Security.CodeAccessSecurityEngine.CheckSetHelper(PermissionSet grants, PermissionSet refused, PermissionSet demands, RuntimeMethodHandle rmh, Object assemblyOrString, SecurityAction action, Boolean throwException) +284
        System.Security.PermissionSetTriple.CheckSetDemand(PermissionSet demandSet, PermissionSet& alteredDemandset, RuntimeMethodHandle rmh) +69
        System.Security.PermissionListSet.CheckSetDemand(PermissionSet pset, RuntimeMethodHandle rmh) +150
        System.Security.PermissionListSet.DemandFlagsOrGrantSet(Int32 flags, PermissionSet grantSet) +30
        System.Threading.CompressedStack.DemandFlagsOrGrantSet(Int32 flags, PermissionSet grantSet) +40
        System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper(Int32 permission, PermissionSet targetGrant, CompressedStack securityContext) +123
        System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper(Int32 permission, PermissionSet targetGrant, Resolver accessContext) +41

    Look forward to your help. Regards, Ankita, Software Developer, Shakti Informatics Pvt. Ltd.
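
    To reproduce the host's environment locally before deploying, the standard ASP.NET trust setting can be added to web.config (whether an MVC 3.5 app then runs fully under Medium trust depends on what its referenced assemblies demand):

        <system.web>
          <!-- Run the site under the Medium trust policy, matching the host -->
          <trust level="Medium" />
        </system.web>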

    Read the article

  • xml appending issue

    - by 3gwebtrain
    Can anyone help me? I have an XML file with two levels. Example content (flattened here from the source XML):

        Type of Company: Architects may be self-employed. Workspace – Indoors/outdoors: Architects work both. Environment Travel: Architects often visit construction sites to review the progress of projects. People: They work a lot with other professionals involved in the construction project including engineers, contractors, surveyors and landscape architects. Casual: They usually work in a casual and comfortable environment. Hours: The hours are varied based on the project they are working on. Physically demanding: They stand on their feet. Tools: Computers - Architects Assist clients in obtaining construction bids Observe, inspect and monitor building work

    In my function I am using "list.each" to append to ul+index, and it works fine. My problem is that while I append the "list.each", the "sublistgroup" should not be appended to "list.each"; instead the "sublistgroup" needs to make its own "ul", and inside that ul the "sublist" items need to become children. My code is here... I know that I am doing something the wrong way; please correct it and let me know.

        $(function () {
            $.get('career-utility.xml', function (myData) {
                $(myData).find('listgroup').each(function (index) {
                    var count = index;
                    var listGroup = $(this);
                    var listGroupTitle = $(this).attr('title');
                    var shortNote = $(this).attr('shortnote');
                    var subLink = $(this).find('sublist');
                    var firstList = $(this).find('list');

                    $('.grouplist').append('<div class="list-group"><h3>' + listGroupTitle + '</h3><ul class="level-one level' + count + '"></ul></div>');

                    firstList.each(function (listnum) {
                        var subList = $(this).text();
                        var subListLeveltwo = $(this).find('sublist').text();
                        if (subListLeveltwo == '') {
                            $('<li>' + subList + '</li>').appendTo('ul.level' + count);
                        } else {
                            $('<li class="new">' + subList + '</li>').appendTo('ul.level' + count);
                        }
                    });
                });
            });
        });

    Read the article

  • Internal bug tracking tickets - Redmine, Trac, or JIRA

    - by Tai Squared
    I've been looking at setting up Redmine, Trac, or JIRA to track issues. I want my development team to be able to create internal tickets that are never seen by clients, while clients can create/edit tickets that are seen by the internal team. From the Trac documentation, you can set permissions to create or view tickets, but it doesn't seem to allow viewing only certain tickets. It may be possible with Trac Fine Grained Permissions, but it doesn't appear so. The Redmine documentation mentions "Define your own roles and set their permissions in a click", but it doesn't appear to have that level of granularity. From the JIRA documentation: "At the moment JIRA is only able to support security at a project level or issue level. Currently there is no field level security available." According to this question, Redmine doesn't support internal tickets, so you would have to use multiple projects. I don't want a situation where I would have to create multiple projects - one internal, one external - and have the external tickets brought into the internal repository. It seems as though this would lead to unnecessary overhead and, inevitably, the projects wouldn't stay in sync. Is there any way with any of these products (possibly through a plug-in if not in the core product itself) to specify these permissions, or to simplify having two projects with different users and permissions that must still share information?

    Read the article

  • multi-part identifier could not be bound error

    - by vishal Shah
    Here is my query:

        IF OBJECT_ID('NPWAS1513.dbo.usp_MSPEX_QLK_Billing_Fact_Load') IS NOT NULL
            DROP PROCEDURE dbo.usp_MSPEX_QLK_Billing_Fact_Load;
        GO

        CREATE PROCEDURE usp_MSPEX_QLK_Billing_Fact_Load
            @create_timestamp datetime,
            @update_timestamp datetime,
            @create_user varchar(50),
            @update_user varchar(50),
            @dbProdServ varchar(50)
        AS
        print 'dbProdServ is:' + @dbProdServ;
        print 'current_user is:' + @current_user;

        DECLARE @sSQL AS VARCHAR(MAX);
        SET @sSQL = ''
        SET @sSQL = 'set identity_insert ' + @dbProdServ + '.mspex_qlk_billing_fact ON'
        EXEC(@sSQL);

        SET @sSQL = 'INSERT INTO ' + @dbProdServ + '.mspex_qlk_billing_fact
            (project_id, billing_year, billing_month, billing_month_desc, billing_date_id,
             projected_bill_amount, billed_year_to_date_amount, billed_inception_to_date_amount,
             remaining_bill_amount, actual_billed_amount, current_billing_percent,
             previous_billing_percent, billing_pct_diff, billing_type, final_bill_ind,
             last_in_progress_date, current_record_ind, load_time_stamp, total_contract_period,
             contract_period_current_year, partial_bill, create_timestamp, create_user,
             update_timestamp, update_user)
            SELECT project_dim.project_id,
                   billing_final_data.billingyear,
                   billing_final_data.billingmonth,
                   billing_final_data.billingmonthdesc,
                   Time_Dim.Time_ID,
                   billing_final_data.projected_bill_amount,
                   billing_final_data.billed_year_todate_amount,
                   billing_final_data.billed_inception_todate_amount,
                   billing_final_data.remaining_bill_amount,
                   billing_final_data.actual_billed_amount,
                   billing_final_data.current_billing_percent,
                   billing_final_data.previous_billing_percent,
                   billing_final_data.billing_pct_diff,
                   billing_final_data.billing_type,
                   billing_final_data.final_bill_ind,
                   billing_final_data.last_in_progress_date,
                   billing_final_data.current_record_ind,
                   billing_final_data.load_time_stamp,
                   billing_final_data.[Total Contract Period],
                   billing_final_data.[Contract Period Current Year],
                   billing_final_data.[Partial Bill],
                   ''' + CAST(@create_timestamp as varchar(max)) + ''',''' + @create_user + ''',''' + CAST(@update_timestamp as varchar(50)) + ''',''' + @update_user + '''
            FROM ' + @dbProdServ + '.mspex_qlk_project_dim project_dim,
                 ' + @dbProdServ + '.mspex_rpt_billing_final_data billing_final_data,
                 ' + @dbProdServ + '.MSPEX_QLK_Time_Dim Time_Dim
            WHERE project_dim.myproject_project_uid = billing_final_data.projectuid
            AND ''' + convert(datetime, cast(billing_final_data.[BillingMonth] as nvarchar(2)) + '''/01/''' + cast(billing_final_data.[billingyear] as nvarchar(4)), 101) + ''' = Time_Dim.Time_Date';

        BEGIN TRANSACTION
        EXEC(@sSQL)
        COMMIT TRANSACTION

    I get this error message:

        Msg 4104, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 23
        The multi-part identifier "mspex_rpt_billing_final_data.BillingMonth" could not be bound.
        Msg 4104, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 23
        The multi-part identifier "billing_final_data.billingyear" could not be bound.
        Msg 207, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 83
        Invalid column name 'BillingMonth'.
        Msg 207, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 84
        Invalid column name 'billingyear'.

    I checked the column names, etc. and things are fine. In fact, I directly dragged the table name and column name to ensure that they are correct. Please help ASAP. Calling my cell at 630-338-9427 would be great, but an URGENT response is absolutely necessary. Thanks guys.
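
    A hedged reading of the script rather than a confirmed answer: the convert(...) expression sits outside the quoted dynamic-SQL string, so SQL Server tries to bind billing_final_data.[BillingMonth] while compiling the procedure itself, where no such alias exists; hence Msg 4104/207. Moving the whole expression inside the string, so it is evaluated by EXEC(@sSQL) where the alias does exist, would look roughly like this:

        -- Build the date comparison as text so it runs inside EXEC(@sSQL).
        SET @sSQL = @sSQL + '
            AND convert(datetime,
                    cast(billing_final_data.[BillingMonth] as nvarchar(2))
                    + ''/01/'' +
                    cast(billing_final_data.[billingyear] as nvarchar(4)), 101)
                = Time_Dim.Time_Date';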

    Read the article

  • Dispatching a request to an async servlet from a managed bean generates an exception

    - by Thang Pham
    When a button is clicked, I need to have stuff running in the background, so I have an async servlet. From my managed bean, if I do a redirect, it works great (meaning that it executes the run() method inside my class that extends Runnable correctly), like this:

        String url = externalContext.getRequestContextPath() + "/ReportExecutionServlet";
        externalContext.redirect(url);

    But if I switch to dispatch, like this:

        externalContext.dispatch("/ReportExecutionServlet");

    it fails when I try to obtain the AsyncContext:

        AsyncContext aCtx = request.startAsync(request, response);

    The error is below:

        Caused By: java.lang.IllegalStateException: The async-support is disabled on this request: weblogic.servlet.internal.ServletRequestImpl

    Any idea how to fix this, please? NOTE: This is how I execute my async servlet, just in case:

        AsyncContext aCtx = request.startAsync(request, response);

        // delegate the long-running process to an "async" thread
        aCtx.addListener(new AsyncListener() {
            @Override
            public void onComplete(AsyncEvent event) throws IOException {
                logger.log(Level.INFO, "ReportExecutionServlet handle async request - onComplete");
            }

            @Override
            public void onTimeout(AsyncEvent event) throws IOException {
                logger.log(Level.WARNING, "ReportExecutionServlet handle async request - onTimeout");
            }

            @Override
            public void onError(AsyncEvent event) throws IOException {
                logger.log(Level.SEVERE, "ReportExecutionServlet handle async request - onError");
            }

            @Override
            public void onStartAsync(AsyncEvent event) throws IOException {
                logger.log(Level.INFO, "ReportExecutionServlet handle async request - onStartAsync");
            }
        });

        // Start another service
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(10);
        executor.execute(new AsyncRequestReportProcessor(aCtx));
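
    The usual cause of that IllegalStateException, offered as a hedged suggestion since the thread records no answer: async support must be enabled along the whole request chain. With Servlet 3.0 the flag sits on the servlet declaration, and every filter the dispatched request passes through needs the same flag:

        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;

        // asyncSupported must be true here (and on any filters in the chain),
        // otherwise request.startAsync() throws "The async-support is
        // disabled on this request".
        @WebServlet(urlPatterns = "/ReportExecutionServlet", asyncSupported = true)
        public class ReportExecutionServlet extends HttpServlet {
            // doGet/doPost call request.startAsync(...) as shown above
        }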

    Read the article

  • How to recursive rake? -- or suitable alternatives

    - by TerryP
    I want my project's top-level Rakefile to build things using Rakefiles deeper in the tree; i.e. the top-level Rakefile says how to build the project (big picture) and the lower-level ones build a specific module (local picture). There is of course a shared set of configuration for the minute details of doing that whenever it can be shared between tasks: so it is mostly about keeping the descriptions of what needs building as close as possible to the sources being built. E.g. /Source/Module/code.foo and co. should be built using the instructions in /Source/Module/Rakefile, and /Rakefile understands the dependencies between modules. I don't care if it uses multiple rake processes (a la recursive make) or just creates separate build environments. Either way it should be self-containable enough to be processed by a queue, so that non-dependent modules could be built simultaneously. The problem is, how the heck do you actually do something like that with Rake!? I haven't been able to find anything meaningful on the Internet, nor in the documentation. I tried creating a new Rake::Application object and setting it up, but whatever methods I try invoking, only exceptions or "Don't know how to build task ':default'" errors get thrown. (Yes, all Rakefiles have a :default.) Obviously one could just execute 'rake' in a subdirectory for a :modulename task, but that would ditch the options given to the top level; e.g. think of $(MAKE) and $(MAKEFLAGS). Anyone have a clue on how to properly do something like a recursive rake?
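
    One minimal pattern, offered as a sketch (the module paths are made up, and flag forwarding is done by hand, since rake has no built-in $(MAKEFLAGS) equivalent): the top-level Rakefile shells out to each module's Rakefile with -f.

        # Top-level Rakefile: delegates to per-module Rakefiles.
        MODULES = %w[Source/ModuleA Source/ModuleB]

        # Forward selected options explicitly, e.g. rake RAKE_FLAGS=--trace
        RAKE_FLAGS = ENV['RAKE_FLAGS'] || ''

        task :default => :build

        desc 'Build every module via its own Rakefile'
        task :build do
          MODULES.each do |dir|
            # -f picks the sub-Rakefile; sh raises if the child rake fails.
            sh "rake #{RAKE_FLAGS} -f #{dir}/Rakefile default"
          end
        end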

    Read the article

  • adapting combination code for larger list

    - by vbNewbie
    I have the following code to generate combinations of strings for a small list and would like to adapt this for a large list of over 300 strings. Can anyone suggest how to alter this code, or a different method to use?

        Public Class combinations
            Public Shared Sub main()
                Dim myAnimals As String = "cat dog horse ape hen mouse"
                Dim myAnimalCombinations As String() = BuildCombinations(myAnimals)
                For Each combination As String In myAnimalCombinations
                    'Look on the Output Tab for the results!
                    Console.WriteLine("(" & combination & ")")
                Next combination
                Console.ReadLine()
            End Sub

            Public Shared Function BuildCombinations(ByVal inputString As String) As String()
                'Separate the sentence into useable words.
                Dim wordsArray As String() = inputString.Split(" ".ToCharArray)
                'A place to store the results as we build them
                Dim returnArray() As String = New String() {""}
                'The 'combination level' that we're up to
                Dim wordDistance As Integer = 1
                'Go through all the combination levels...
                For wordDistance = 1 To wordsArray.GetUpperBound(0)
                    'Go through all the words at this combination level...
                    For wordIndex As Integer = 0 To wordsArray.GetUpperBound(0) - wordDistance
                        'Get the first word of this combination level
                        Dim combination As New System.Text.StringBuilder(wordsArray(wordIndex))
                        'And add all the remaining words at this combination level
                        For combinationIndex As Integer = 1 To wordDistance
                            combination.Append(" " & wordsArray(wordIndex + combinationIndex))
                        Next combinationIndex
                        'Add this combination to the results
                        returnArray(returnArray.GetUpperBound(0)) = combination.ToString
                        'Add a new row to the results, ready for the next combination
                        ReDim Preserve returnArray(returnArray.GetUpperBound(0) + 1)
                    Next wordIndex
                Next wordDistance
                'Get rid of the last, blank row.
                ReDim Preserve returnArray(returnArray.GetUpperBound(0) - 1)
                'Return combinations to the calling method.
                Return returnArray
            End Function
        End Class
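
    One hedged suggestion for scaling, not taken from the thread: the repeated ReDim Preserve copies the whole result array on every append, which is what hurts once the list grows to 300+ words; a List(Of String) appends in amortized constant time. A sketch of the same algorithm on that basis:

        Public Shared Function BuildCombinationsList(ByVal inputString As String) As List(Of String)
            Dim wordsArray As String() = inputString.Split(" "c)
            Dim results As New List(Of String)
            For wordDistance As Integer = 1 To wordsArray.GetUpperBound(0)
                For wordIndex As Integer = 0 To wordsArray.GetUpperBound(0) - wordDistance
                    Dim combination As New System.Text.StringBuilder(wordsArray(wordIndex))
                    For combinationIndex As Integer = 1 To wordDistance
                        combination.Append(" " & wordsArray(wordIndex + combinationIndex))
                    Next
                    results.Add(combination.ToString()) 'amortized O(1), no array copy
                Next
            Next
            Return results
        End Function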

    Read the article

  • Android ExpandableListView From Database

    - by SterAllures
    I want to create a two-level ExpandableListView from a local database. For the group level I want to get the names from the database's Category table:

        category_id | name
        ----------------------------
        1           | NameOfCategory

    For the children level I want to get the names from the List table:

        list_id | name     | foreign_category_id
        ----------------------------------------
        1       | Listname | 1

    I've got a method in DatabaseDAO to get all the values from the table:

        public List<Category> getAllCategories() {
            database = dbHelper.getReadableDatabase();
            List<Category> categories = new ArrayList<Category>();

            Cursor cursor = database.query(SQLiteHelper.TABLE_CATEGORY, null, null, null, null, null, null);
            cursor.moveToFirst();
            while (!cursor.isAfterLast()) {
                Category category = cursorToCategory(cursor);
                categories.add(category);
                cursor.moveToNext();
            }
            cursor.close();
            return categories;
        }

    Adding the names to the group level is easy; I do it the following way:

        ArrayList<String> groups;
        for (Category c : databaseDao.getAllCategories()) {
            groups.add(c.getName());
        }

    Now I want to add the children to the ExpandableListView in the following array: ArrayList<ArrayList<ArrayList<String>>> children;. How do I get the children under the correct group name? I think it has to do something with groups.indexOf(), but in the List table I only have a foreign category_id and not the actual name.
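
    A sketch of one way to line the children up with the groups (ListItem, getId() and getListsByCategoryId() are hypothetical names for the List-table entity and a DAO query filtering on foreign_category_id; a two-level structure is enough when each child is a single string):

        ArrayList<String> groups = new ArrayList<String>();
        ArrayList<ArrayList<String>> children = new ArrayList<ArrayList<String>>();

        for (Category c : databaseDao.getAllCategories()) {
            groups.add(c.getName());

            ArrayList<String> names = new ArrayList<String>();
            for (ListItem item : databaseDao.getListsByCategoryId(c.getId())) {
                names.add(item.getName());
            }
            // index i in groups lines up with index i in children,
            // which is the pairing an ExpandableListAdapter expects
            children.add(names);
        }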

    Read the article

  • Logback: What would cause DEBUG - WARN to append to file but NOT ERROR?

    - by Zombies
    I am running a Java program from Eclipse, and I am using the logback console plugin. I can see the ERROR level statements being appended to the console (as well as all of the others). But for some reason my file, which is correctly receiving new DEBUG-WARN statements, is NOT receiving the ERROR level ones. Here is my logback.xml:

        <configuration>
          <consolePlugin />

          <appender name="FILE" class="ch.qos.logback.core.FileAppender">
            <file>logs/main.log</file>
            <layout class="ch.qos.logback.classic.PatternLayout">
              <Pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</Pattern>
            </layout>
          </appender>

          <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <layout class="ch.qos.logback.classic.PatternLayout">
              <Pattern>%msg%n</Pattern>
            </layout>
          </appender>

          <logger name="WebsiteChecker">
            <appender-ref ref="FILE" />
          </logger>

          <root>
            <level value="debug" />
            <!--<appender-ref ref="STDOUT" />-->
            <appender-ref ref="FILE" />
          </root>
        </configuration>

    Read the article

  • Why is log4j not behaving as expected?

    - by Kieveli
    I have a co-worker who is trying to get log4j to behave as follows: log to stdout; by default, disable most output; show only messages from java.sql.PreparedStatement at level debug and up. He's getting caught up in the 'level' vs 'priority' distinction. Here is his config file:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE log4j:configuration SYSTEM "D:/Java/apache-log4j-1.2.15/src/main/resources/org/apache/log4j/xml/log4j.dtd" >
        <log4j:configuration>

          <!-- Appenders -->
          <appender name="stdout" class="org.apache.log4j.ConsoleAppender">
            <layout class="org.apache.log4j.PatternLayout">
              <param name="ConversionPattern" value="%5p %d{ISO8601} [%t][%x] %c - %m%n" />
            </layout>
          </appender>

          <!-- Loggers for iBATIS and JDBC database -->
          <logger name="java.sql.PreparedStatement">
            <level value="debug"/>
          </logger>

          <!-- The Root Logger -->
          <root>
            <level value="error"/>
            <appender-ref ref="stdout"/>
          </root>

        </log4j:configuration>

    The output from this shows no messages in the log output. How does he need to change his log4j.xml config file to make it behave as he's expecting?

    Read the article

  • MySQL Ratings From Two Tables

    - by DirtyBirdNJ
    I am using MySQL and PHP to build a data layer for a flash game. Retrieving lists of levels is pretty easy, but I've hit a roadblock in trying to fetch each level's average rating along with its pointer information. Here is an example data set:

        levels table:

        level_id | level_name
        ---------------------
        1        | Some Level
        2        | Second Level
        3        | Third Level

        ratings table:

        rating_id | level_id | rating_value
        -----------------------------------
        1         | 1        | 3
        2         | 1        | 4
        3         | 1        | 1
        4         | 2        | 3
        5         | 2        | 4
        6         | 2        | 1
        7         | 3        | 3
        8         | 3        | 4
        9         | 3        | 1

    I know this requires a join, but I cannot figure out how to get the average rating value based on the level_id when I request a list of levels. This is what I'm trying to do:

        SELECT levels.level_id,
               AVG(ratings.level_rating WHERE levels.level_id = ratings.level_id)
        FROM levels

    I know my SQL is flawed there, but I can't figure out how to get this concept across. The only thing I can get to work is returning a single average from the entire ratings table, which is not very useful. Ideal output from the above conceptually valid but syntactically awry query would be:

        level_id | level_rating
        -----------------------
        1        | 3.34
        2        | 1.00
        3        | 4.54

    My main issue is I can't figure out how to use the level_id of each response row before the query has been returned. It's like I want to use a placeholder... or an alias... I really don't know, and it's very frustrating. The solution I have in place now is an EPIC band-aid and will only cause me problems long term... please help!
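
    For completeness, the standard shape of that query is a grouped join (a sketch; note the ratings column is named rating_value in the schema above, and the per-level averaging the question gropes for with a "placeholder" is exactly what GROUP BY does):

        SELECT levels.level_id,
               AVG(ratings.rating_value) AS level_rating
        FROM levels
        LEFT JOIN ratings
               ON ratings.level_id = levels.level_id
        GROUP BY levels.level_id;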

    Read the article

  • In Corona SDK the background image always covers other images

    - by user1446126
    I'm currently making a tower defense game with Corona SDK. However, while I'm building the game scene, the background image always covers the monster spawn. I've tried background:toBack(), but it doesn't work. Here is my code:

        module(..., package.seeall)

        function new()
            local localGroup = display.newGroup();
            local level = require(data.levelSelected);
            local currentDes = 1;

            monsters_list = display.newGroup()

            -- The background
            local bg = display.newImage("image/levels/1/bg.png");
            bg.x = _W/2; bg.y = _H/2;
            bg:toBack();

            -- generate the monsters
            function spawn_monster(kind)
                local monster = require("monsters." .. kind);
                newMonster = monster.new()
                -- read the spawn (starting point) in level, and spawn the monster there
                newMonster.x = level.route[1][1]; newMonster.y = level.route[1][2];
                monsters_list:insert(newMonster);
                localGroup:insert(monsters_list);
                return monsters_list;
            end

            function move(monster, x, y)
                -- Use Pythagoras to calculate the moving distance, hence the time consumed according to speed
                transition.to(monster, {time = math.sqrt(math.abs(monster.x - x)^2 + math.abs(monster.y - y)^2) / (monster.speed / 30), x = x, y = y, onComplete = newDes})
            end

            function newDes()
                currentDes = currentDes + 1;
            end

            -- make monsters move according to the route
            function move_monster()
                for i = 1, monsters_list.numChildren do
                    move(monsters_list[i], 200, 200);
                    print(currentDes);
                end
            end

            function agent()
                spawn_monster("basic");
            end

            -- Execute the functions above.
            timer2 = timer.performWithDelay(1000, agent, 10);
            timer.performWithDelay(100, move_monster, -1);
            timer.performWithDelay(10, update, -1);
            move_monster();

            return localGroup;
        end

    and the monster just stays stuck at the spawn point. But when I comment out these three lines of code:

        -- local bg = display.newImage("image/levels/1/bg.png");
        -- bg.x = _W/2; bg.y = _H/2;
        -- bg:toBack();

    the problem disappears. Any ideas? Thanks for helping.
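
    A hedged guess, since the thread records no answer: display.newImage() puts the background on the global stage while the monsters live inside localGroup, so bg:toBack() only reorders the background among its stage siblings. Inserting it into the scene's own group first keeps the z-order inside that group:

        -- Put the background into the scene's group, then push it to the
        -- back of that group so later inserts (monsters) draw above it.
        local bg = display.newImage("image/levels/1/bg.png")
        bg.x = _W / 2; bg.y = _H / 2
        localGroup:insert(bg)
        bg:toBack()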

    Read the article

  • How to calculate commission based on referred members?

    - by RAJKISHOR SAHU
    Hello everybody, I am developing a small application in C# WPF for a consultancy that does chain-system business. I have coded a tree structure to show who referred whom. Now there is commission depending on level. If 1 referred 2 & 3, then 1 will get level-1 commission. If 2 referred 4, 5 and 3 referred 6, 7, then 1 will receive level-2 commission. This chain goes on up to a certain total number. My problem is how I would implement this logic; I am able to calculate who has referred how many members via the UDF written for adding TreeViewItems to the TreeView. Or tell me how I can count the items in a TreeView at a certain level? Node-adding UDF:

        public void AddNodes(int uid, TreeViewItem tSubNode)
        {
            string query = "select fullname, id from members where refCode=" + uid + ";";
            MySqlCommand cmd = new MySqlCommand(query, db.conn);
            MySqlDataAdapter _DA = new MySqlDataAdapter(cmd);
            DataTable _DT = new DataTable();
            tSubNode.IsExpanded = true;
            _DA.Fill(_DT);
            foreach (DataRow _dr in _DT.Rows)
            {
                TreeViewItem tNode = new TreeViewItem();
                tNode.Header = _dr["fullname"].ToString() + " (" + _dr["id"].ToString() + ")";
                tSubNode.Items.Add(tNode);
                if (db.HasMembers(Convert.ToInt32(_dr["id"].ToString())))
                {
                    AddNodes(Convert.ToInt32(_dr["id"]), tNode);
                }
            }
            // This line tracks who has referred how many members
            Console.WriteLine("Tree node Count : " + tSubNode.Items.Count.ToString() + ", UID " + uid);
        }

    Help me PLEASE!!!!
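
    A sketch of the level-count question, not from the thread (WPF's TreeView and TreeViewItem are both ItemsControls, so one recursive walk covers the root and every node): depth 1 gives direct referrals for level-1 commission, depth 2 their referrals for level-2, and so on.

        using System.Windows.Controls;

        static int CountAtLevel(ItemsControl node, int level)
        {
            if (level == 0) return 1; // reached the requested depth
            int count = 0;
            foreach (object child in node.Items)
            {
                TreeViewItem item = child as TreeViewItem;
                if (item != null)
                    count += CountAtLevel(item, level - 1);
            }
            return count;
        }

        // Usage sketch: decimal levelTwoPay = CountAtLevel(rootNode, 2) * level2Rate;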

    Read the article

  • node.js beginner tutorials?

    - by TreyK
    Hey all, I'm working on creating my first real node.js http server, and I'm sort of drowning in it. As a good teacher of mine always said, "I'll just shove you in the water for now, and then I'll show you how to swim." Fortunately, she wasn't a swimming instructor, but it's a good analogy nonetheless. I feel like I've jumped into node.js and I've only found a ping pong ball to help, that is to say, most of the tutorials I've read stop shortly after the "Hello World" example and I've mostly been trying to make sense of copied and pasted code (or they assume I have knowledge of lower level HTTP and webserver concepts that have been done for me as an Apache/PHP developer). I have experience in both client-side Javascript and PHP, but node seems to be a beast all of its own. I don't quite have the low-level knowledge that seems necessary for creating a node server, and connect, which seems to be a nice module for simplifying things, seems quite sparsely explained, even in the docs on its Git. Where could I find some tutorials to help me in this situation? TL;DR - Are there any tutorials for node.js that go beyond "Hello World" but don't require much low-level knowledge? Or any tutorials that explain lower-level HTTP and webserver concepts that I would need to effectively create a node HTTP server? Thanks for any help. -Trey

    Read the article

  • Java: design for using many executor services and only a few threads

    - by Guillaume
    I need to run multiple threads in parallel to perform some tests. My 'test engine' will have n tests to perform, each one doing k sub-tests. Each test result is stored for later usage. So I have n*k processes that can be run concurrently. I'm trying to figure out how to use the Java concurrency tools efficiently. Right now I have an executor service at the test level and n executor services at the subtest level. I create my list of Callables for the test level. Each test callable will then create another list of callables for the subtest level. When invoked, a test callable will subsequently invoke all of its subtest callables:

        test 1
          subtest a1
          subtest ...1
          subtest k1
        test n
          subtest a2
          subtest ...2
          subtest k2

        call sequence:
          test manager creates test 1 callable
          test1 callable creates subtests a1 to k1
          testn callable creates subtests an to kn
          test manager invokes all test callables
          test1 callable invokes all subtests a1 to k1
          testn callable invokes all subtests an to kn

    This is working fine, but a lot of new threads get created. I cannot share executor services, since I need to call 'shutdown' on the executors. My idea to fix this problem is to provide the same fixed-size thread pool to each executor service. Do you think that is a good design? Am I missing something more appropriate/simple for doing this?
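
    A common alternative, sketched here with invented test/subtest bodies: since the subtests are the real units of work, submit all n*k of them to one shared fixed-size pool and keep the grouping in plain data structures, so there is exactly one executor and one shutdown.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class TestEngine {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(8); // the only pool

                int n = 4, k = 5;
                List<Future<String>> results = new ArrayList<Future<String>>();

                // All n*k subtests go to the same pool; no nested executors.
                for (int test = 1; test <= n; test++) {
                    for (int sub = 1; sub <= k; sub++) {
                        final int t = test, s = sub;
                        results.add(pool.submit(new Callable<String>() {
                            public String call() {
                                return "test " + t + " / subtest " + s + " passed";
                            }
                        }));
                    }
                }

                for (Future<String> f : results) {
                    System.out.println(f.get()); // store each result for later usage
                }
                pool.shutdown(); // single shutdown at the very end
            }
        }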

    Read the article
