Search Results

Search found 67506 results on 2701 pages for 'management data warehouse'.


  • [EF + Oracle] Inserting Data (Sequences) (2/2)

    - by JTorrecilla
    Prologue: In the previous chapter we saw how to create DB records with EF; now we are going to look at some Oracle-specific questions.

    Oracle: One SQL Server feature that Oracle lacks is "Identity". For anyone who has not worked with SQL Server, this property, which applies to integer columns, marks a column as auto-incrementing: it is filled automatically, without writing it in the insert statement. In EF with SQL Server, the properties that map to Identity columns are filled after invoking the SaveChanges method. Oracle has no Identity property, but it has something similar.

    Sequences: Sequences are DB objects that provide auto-incrementing values, but they are not tied directly to a table. The syntax is as follows (name, increment, starting value, maximum value, minimum value):

        CREATE SEQUENCE nombre_secuencia
        INCREMENT BY numero_incremento
        START WITH numero_por_el_que_empezara
        MAXVALUE valor_maximo | NOMAXVALUE
        MINVALUE valor_minimo | NOMINVALUE
        CYCLE | NOCYCLE
        ORDER | NOORDER

    How do you get a sequence value? To obtain the next value from the sequence:

        SELECT nb_secuencia.Nextval
        FROM Dual

    Because there is no direct way to indicate that a column is tied to a sequence, there are several ways to imitate the behavior: use a trigger (DB), use stored procedures or functions, or my particular option. The EF model only imports table objects, stored procedures or functions, but not sequences. So I decided to create my own extension method to invoke the next value of a sequence:

        public static class EFSequence
        {
            public static int GetNextValue(this ObjectContext contexto, string SequenceName)
            {
                string Connection = ConfigurationManager.ConnectionStrings["JTorrecillaEntities2"].ConnectionString;
                Connection = Connection.Substring(Connection.IndexOf(@"connection string=") + 19);
                Connection = Connection.Remove(Connection.Length - 1, 1);
                using (IDbConnection con = new Oracle.DataAccess.Client.OracleConnection(Connection))
                {
                    using (IDbCommand cmd = con.CreateCommand())
                    {
                        con.Open();
                        cmd.CommandText = String.Format("Select {0}.nextval from DUAL", SequenceName);
                        return Convert.ToInt32(cmd.ExecuteScalar());
                    }
                }
            }
        }

    This ObjectContext extension method runs a query for the sequence passed as a parameter. It takes the connection string from the app settings, strips the metadata that Visual Studio added when it generated the EF model, and then returns the next value of the sequence. The next value of a sequence is unique, so concurrent users creating records in the DB through the sequence will not get duplicates. This is my own implementation; I know there are several other, probably better, ways to do it. If I find another way, I promise to post it. To use the example you need to add a reference to the Oracle (ODP.NET) dll.
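
    As a quick usage illustration (added here, not part of the original post; the context class, entity type, entity set and sequence name are assumed placeholders), the extension method above can be combined with a normal EF insert so the sequence value ends up in the new row:

        // Hypothetical usage sketch: Widget, Widgets and WIDGET_SEQ are placeholders.
        using (var context = new JTorrecillaEntities2())
        {
            // Ask Oracle for the next sequence value before the insert.
            int nextId = context.GetNextValue("WIDGET_SEQ");

            var widget = new Widget { Id = nextId, Name = "Sample" };
            context.Widgets.AddObject(widget);

            // The row is inserted with the sequence-generated key.
            context.SaveChanges();
        }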

    Read the article

  • WMI permissions: Select CommandLine, ProcessId FROM Win32_Process returns no data for CommandLine

    - by user57935
    Hi all, I am gathering performance data via WMI and would like to avoid having to use an account in the Administrators group for this purpose. The target machine is running Windows Server 2003 with the latest SP/updates. I've done what I believe to be the appropriate configuration to allow our user access to WMI (similar to what is described here: http://msdn.microsoft.com/en-us/library/aa393266.aspx). Here are the specific steps that were followed:
    1. Open Administrative Tools - Computer Management: under Computer Management (Local), expand Services and Applications, right-click WMI Control and select Properties. In the Security tab, expand Root, highlight CIMV2, click Security (near the bottom of the window); add Performance Monitor Users and enable the options Enable Account and Remote Enable.
    2. Open Administrative Tools - Component Services: under Console Root go to Component Services - Computers - right-click My Computer and select Properties, then select the COM Security tab. In "Access Permissions" click "Edit Default", select (or add, then select) the "Performance Monitor Users" group, allow local access and remote access, and click OK. In "Launch and Activation Permissions" click "Edit Default", select (or add, then select) the "Performance Monitor Users" group and allow Local and Remote Launch and Activation Permissions.
    3. Open Administrative Tools - Component Services: under Console Root go to Component Services - Computers - My Computer - DCOM Config - highlight "Windows Management and Instrumentation", right-click and select Properties, then select the Security tab. Under "Launch and Activation Permissions" select Customize, click Edit, add the "Performance Users Group" and allow local and remote Remote Launch and Remote Activation privileges.
    I am able to connect remotely via WMI Explorer, but when I perform this query:
        Select CommandLine, ProcessId FROM Win32_Process
    I get a valid result but every row has an empty CommandLine. If I add the user to the Administrators group and re-run the query, the CommandLine column contains the expected data. It seems there is a permission I am missing somewhere but I am not having much luck tracking it down. Many thanks in advance.
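
    For reference, here is a minimal C# sketch of the query being discussed (added as an illustration, not code from the original question; the host name and credentials are placeholders), using the System.Management API:

        using System;
        using System.Management;

        class WmiCommandLineQuery
        {
            static void Main()
            {
                // Placeholder credentials for a non-administrator account.
                var options = new ConnectionOptions
                {
                    Username = @"TARGETHOST\perfuser",
                    Password = "secret",
                    Impersonation = ImpersonationLevel.Impersonate,
                    Authentication = AuthenticationLevel.PacketPrivacy
                };

                var scope = new ManagementScope(@"\\TARGETHOST\root\cimv2", options);
                scope.Connect();

                var query = new ObjectQuery("SELECT CommandLine, ProcessId FROM Win32_Process");
                using (var searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject process in searcher.Get())
                    {
                        // CommandLine comes back null/empty when the account lacks the
                        // required rights, which is the symptom described above.
                        Console.WriteLine("{0}: {1}", process["ProcessId"], process["CommandLine"]);
                    }
                }
            }
        }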

    Read the article

  • Zabbix Proxy not collecting data

    - by Jordan Eunson
    I have a working Zabbix 1.8.2 server collecting data for our office and our colo facility. However, the link between the colo and the office is flaky. What I'm trying to do is set up a proxy on the colo side with a 1-hour cache that relays the data to our primary server at the office. Our Zabbix server is compiled from source and uses a MySQL database. I've followed the instructions found in the Zabbix documentation to compile the proxy using a SQLite3 database. I added the proxy to Zabbix under Administration - DM - Proxies. The Zabbix server "sees" the proxy because the "last seen" field is always under 60s. However, when I assign a colo host to the proxy I stop receiving data from it. The colo host's zabbix_agentd.log file says this:
        29343:20100622:124847 Timeout while answering request
        29343:20100622:124847 Getting list of active checks failed. Will retry after 60 seconds
    The zabbix_proxy.log says this:
        2041:20100622:123131.760 Deleted 0 records from history [0.000994 seconds]
        2028:20100622:124131.671 Error while receiving answer from server [ZBX_TCP_READ() failed
    I am also unable to receive any SNMP data, which is more important to me than the Zabbix agent data. Has anyone had this problem before?
    Zabbix Server OS: CentOS 5.4
    Zabbix Server Build: 1.8.2 from source
    Zabbix Proxy OS: CentOS 5.4
    Zabbix Proxy Build: 1.8.2 from source
    P.S. The SQLite database on the Zabbix proxy never gets any data written to it; it is identical to when I created it from the blank schema in zabbix-1.8.2/create/schema. (Yes, I've checked the permissions.)

    Read the article

  • Cacti not working for SNMP data sources

    - by lorenzo-s
    I installed the cacti and snmpd packages on a Debian server. I'm able to display common graphs in Cacti (such as memory usage, load average, logged-in users, etc.) using the data templates listed as Unix. Now I want to replace these graphs with new ones using SNMP data sources, because they also cover CPU usage and because I can't rule out having to manage multiple hosts in the future. So, I installed snmpd on the machine and left snmpd.conf as it is. In Cacti, I created three new data sources from SNMP templates for the 127.0.0.1 host:
    - ucd/net - CPU Usage - Nice
    - ucd/net - CPU Usage - System
    - ucd/net - CPU Usage - User
    Then I created a new graph from the ucd/net - CPU Usage template, and selected the three data sources in the Graph Item Fields section. The graph is now enabled and running, but empty. No data has been collected. Under Console - Devices my SNMP host is listed as up and running:
        System: Linux ip-xx-xx-xxx-xxx 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64
        Uptime: 929267 (0 days, 2 hours, 34 minutes)
        Hostname: ip-xx-xx-xxx-xxx
        Location: Sitting on the Dock of the Bay
        Contact: Me [email protected]
    In SNMP Options I left everything as it is:
        SNMP Version: Version 1
        SNMP Community: public
        SNMP Timeout: 500 ms
        Maximum OID's Per Get Request: 10
    In Console - Utilities - Cacti Log I get multiple warnings (two for each data source) every 5 minutes:
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[2] DS[18] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.15.0'
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[1] DS[9] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.11.52.0'
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] Host[2] DS[19] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.6.0'
        [...]
    I have the feeling I'm missing something, but I cannot get it...

    Read the article

  • Force database read to master if slave data is stale

    - by Jeff Storey
    I previously asked a specific question about this database replication for new user signup to which I got an answer, but I want to ask this in the more general sense. I have a database setup in which I am using a master/slave combination. I am using the slaves for load balancing (the data itself is partitioned/sharded across multiple databases, but each database has X slaves for load balancing). Let's say I write some data to the master. Now I do a subsequent read which hits a slave, but the slave has not yet caught up to the master. Is there a way (which can be done quickly since it will happen frequently) to determine if the data is stale in the slave so I can then route to the master? In my previous question, it was suggested to do simultaneous writes to the cache and the database. This solution seems practical, but there is still a chance that the data may have been removed from the cache but not yet updated in the slave. A possible solution is to ensure the cache is big enough (based on the typical application load) so the data will not be evicted within the time frame it takes to replicate the data. This seems like it may be feasible. Can anyone provide additional insight into this question? Thanks!
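
    One way to make the staleness check concrete (a rough sketch added here, not from the original question; it assumes a MySQL-style replica that reports lag via SHOW SLAVE STATUS, and the connection objects are placeholders) is to read the replica's reported lag and fall back to the master when it exceeds a threshold:

        using System;
        using System.Data;

        static class ReplicaRouter
        {
            // Returns the connection reads should use: the replica if it is fresh
            // enough, otherwise the master. Assumes both connections are already open.
            public static IDbConnection ChooseReadConnection(
                IDbConnection master, IDbConnection replica, int maxLagSeconds)
            {
                using (IDbCommand cmd = replica.CreateCommand())
                {
                    cmd.CommandText = "SHOW SLAVE STATUS";
                    using (IDataReader reader = cmd.ExecuteReader())
                    {
                        if (reader.Read())
                        {
                            object lag = reader["Seconds_Behind_Master"];
                            // NULL means replication is broken; treat it as stale.
                            if (lag != DBNull.Value && Convert.ToInt32(lag) <= maxLagSeconds)
                            {
                                return replica;
                            }
                        }
                    }
                }
                return master;
            }
        }

    This only checks overall lag, not whether a specific write has arrived; a stricter variant is to wait on the replica for the master's binlog position (e.g. with MASTER_POS_WAIT) before reading, at the cost of extra latency.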

    Read the article

  • How to deduplicate 40TB of data?

    - by Michael Stauffer
    I've inherited a research cluster with ~40TB of data across three filesystems. The data stretches back almost 15 years, and there are most likely a good number of duplicates as researchers copy each other's data for different reasons and then just hang on to the copies. I know about de-duping tools like fdupes and rmlint. I'm trying to find one that will work on such a large dataset. I don't care if it takes weeks (or maybe even months) to crawl all the data - I'll probably throttle it anyway to go easy on the filesystems. But I need to find a tool that's either somehow super efficient with RAM, or can store all the intermediary data it needs in files rather than RAM. I'm assuming that my RAM (64GB) will be exhausted if I crawl through all this data as one set. I'm experimenting with fdupes now on a 900GB tree. It's 25% of the way through and RAM usage has been slowly creeping up the whole time; now it's at 700MB. Or, is there a way to direct a process to use disk-mapped RAM so there's much more available and it doesn't use system RAM? I'm running CentOS 6.
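
    To illustrate the disk-backed approach in code (an added sketch, not something from the question; the paths are placeholders), one can write a "size<TAB>hash<TAB>path" line per file to a plain text file and sort/group that file afterwards, so almost nothing is held in RAM during the crawl:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        class HashToDisk
        {
            static void Main()
            {
                // Placeholder locations.
                string root = "/data/research";
                string indexFile = "/tmp/file-hashes.tsv";

                using (var index = new StreamWriter(indexFile))
                using (var sha = SHA256.Create())
                {
                    foreach (string path in Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories))
                    {
                        try
                        {
                            long size = new FileInfo(path).Length;
                            using (var stream = File.OpenRead(path))
                            {
                                string hash = BitConverter.ToString(sha.ComputeHash(stream));
                                // One line per file; duplicates share the same size+hash prefix.
                                index.WriteLine("{0}\t{1}\t{2}", size, hash, path);
                            }
                        }
                        catch (IOException) { /* skip unreadable files rather than aborting */ }
                        catch (UnauthorizedAccessException) { /* same */ }
                    }
                }
                // Afterwards: sort the index by the first two columns and group
                // identical size+hash pairs to list duplicate candidates.
            }
        }

    In practice it is usually worth grouping files by size first and only hashing sizes that occur more than once, since a file with a unique size cannot have a duplicate.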

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high availability applications (i.e., apps that are up 99.5% of the time with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability. So if our origin servers are having problems, but the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability! I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers? Look at apple.com, for instance: a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers. I can say, however, that since these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow, sorry in advance for the cross-post)
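
    To make the gap between the two measurement policies concrete (an added illustration; all the numbers below are made up), the arithmetic looks like this:

        using System;

        class AvailabilityMath
        {
            static void Main()
            {
                // Hypothetical monthly figures.
                double cdnAvailability = 0.999;      // CDN-served requests
                double originAvailability = 0.990;   // origin-served requests
                double cdnShare = 0.75;              // fraction of traffic on the CDN

                // "Worst case" policy: any component outage counts as downtime.
                double worstCase = Math.Min(cdnAvailability, originAvailability);

                // "User experience" policy: weight each path by the traffic it serves.
                double weighted = cdnShare * cdnAvailability
                                + (1 - cdnShare) * originAvailability;

                Console.WriteLine("Worst case: {0:P3}", worstCase);  // ~99.000%
                Console.WriteLine("Weighted:   {0:P3}", weighted);   // ~99.675%
            }
        }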

    Read the article

  • Did chkdsk make it harder to restore files?

    - by neyl
    My friend asked me to try and fix his loaded Sansa Clip+, which wasn't playing. After opening it in MSC mode I discovered that the Music directory was empty and the total of all files was only a few MB. However, disk properties showed me that it was 7GB full. I then ran Tools - Error Checking and Windows dutifully informed me that the disk was corrupt and that I should run the check again allowing Windows to fix errors. I did that and it told me everything was fixed and that all files were placed in the FOUND.000 directory. FOUND.000 was about 7.5 GB, containing FILE0000-1546 .CHK files. (I am aware of methods like ChkBack to scan and convert to MP3 etc., BUT the original filenames and structure are needed!) Now I started getting worried that I had made things worse! I have plenty of experience with data recovery programs - Recuva, Restore My Files, etc. - and I was planning to use them to scan the drive anyway. But NOW, after CHKDSK "fixed" the drive, maybe it modified critical FAT information vital for data recovery. So I ran these programs and got nothing - no trace of the files! I tried a ton of recovery programs with the same results, TILL EaseUS Data Recovery Wizard found all the files and I purchased the program for $55! My question: In your opinion, did running CHKDSK with automatic fixing of errors make matters worse (i.e. many data recovery programs didn't find a trace, and they would have done so if not for chkdsk), or was the filesystem too corrupt anyhow for regular file recovery programs? If I were a professional, would I be responsible for having run CHKDSK with automatic fixing? Do you know of a better data recovery program than EaseUS Data Recovery Wizard? In my experience I haven't found one. Thanks

    Read the article

  • Un-do Windows disk convert HFS+

    - by BLAKE
    Last night, a friend asked me to give him a copy of a Word document. He handed me an external hard drive and left. I plugged the hard drive into my file server running Windows Server 2003, opened Disk Management and clicked OK. (I know that in Windows 2003 you need to manually assign a drive letter to external drives.) I then looked at the drive in Disk Management and it said that it was unallocated space. I called my friend and he said that there was data on the drive, but he used it with his MacBook. Apparently, when I clicked OK in Disk Management, I converted the drive from the HFS+ file system to something else. Is there any way to undo the disk conversion? I immediately removed the drive, so there was no writing to it. Windows did not format the drive, it just converted it. Is the data still there? All the data recovery programs I have are for Windows; can they read the Mac file system? I need to get the data back, what can I do?

    Read the article

  • how do I resolve "user isn't assigned to any management roles" error in Exchange 2010 EMC?

    - by TheoJones
    Newly installed Exchange 2010 box (technically, a partially installed box, as this error is preventing me from completing the install). When I launch the EMC or the Management PowerShell, I get this error:
        VERBOSE: Connecting to myserver.mydomain.internal
        [myserver.mydomain.internal] Processing data from remote server failed with the following error message: The user "mydomain\administrator" isn't assigned to any management roles. For more information, see the about_Remote_Troubleshooting Help topic.
        Failed to connect to any Exchange Server in the current site.
    The thing is, the logged-in administrator account (confirmed using 'whoami') is a member of the following groups:
    - Administrators
    - Delegated Setup
    - Discovery Management
    - Domain Admins
    - Domain Users
    - Enterprise Admins
    - Exchange Organization Administrators
    - GPO Creator Owners
    - Organization Management
    - Schema Admins
    - Server Management
    Any ideas? How can I get past this?

    Read the article

  • postfix error: open database /var/lib/mailman/data/aliases.db: No such file

    - by Thufir
    In trying to follow the Ubuntu guide for postfix and mailman, I do not understand these directions: This build of mailman runs as list. It must have permission to read /etc/aliases and read and write /var/lib/mailman/data/aliases. Do this with these commands: sudo chown root:list /var/lib/mailman/data/aliases sudo chown root:list /etc/aliases Save and run: sudo newaliases I'm getting this kind of error: root@dur:~# root@dur:~# root@dur:~# telnet localhost 25 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 220 dur.bounceme.net ESMTP Postfix (Ubuntu) ehlo dur 250-dur.bounceme.net 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-STARTTLS 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN quit 221 2.0.0 Bye Connection closed by foreign host. root@dur:~# root@dur:~# tail /var/log/mail.log Aug 28 01:16:43 dur postfix/master[19444]: terminating on signal 15 Aug 28 01:16:43 dur postfix/postfix-script[19558]: starting the Postfix mail system Aug 28 01:16:43 dur postfix/master[19559]: daemon started -- version 2.9.1, configuration /etc/postfix Aug 28 01:16:45 dur postfix/postfix-script[19568]: stopping the Postfix mail system Aug 28 01:16:45 dur postfix/master[19559]: terminating on signal 15 Aug 28 01:16:45 dur postfix/postfix-script[19673]: starting the Postfix mail system Aug 28 01:16:45 dur postfix/master[19674]: daemon started -- version 2.9.1, configuration /etc/postfix Aug 28 01:17:22 dur postfix/smtpd[19709]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Aug 28 01:17:22 dur postfix/smtpd[19709]: connect from localhost[127.0.0.1] Aug 28 01:18:37 dur postfix/smtpd[19709]: disconnect from localhost[127.0.0.1] root@dur:~# root@dur:~# postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix default_transport = smtp home_mailbox = Maildir/ inet_interfaces = loopback-only mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}" mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost myhostname = dur.bounceme.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 readme_directory = no recipient_delimiter = + relay_domains = lists.dur.bounceme.net relay_transport = relay relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/dovecot-auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key smtpd_tls_mandatory_ciphers = medium smtpd_tls_mandatory_protocols = SSLv3, TLSv1 smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport root@dur:~# root@dur:~# And am wondering what connection might be. 
I do see that I don't have the requisite files:
        root@dur:~# ll /var/lib/mailman/data/aliases
        ls: cannot access /var/lib/mailman/data/aliases: No such file or directory
        root@dur:~#
    At what stage were those aliases created? How can I create them? Is that what's causing the error logged above, error: open database /var/lib/mailman/data/aliases.db: No such file or directory, just before the connect from localhost[127.0.0.1]?

    Read the article

  • SQLAlchemy session management in long-running process

    - by codeape
    Scenario: A .NET-based application server (Wonderware IAS/System Platform) hosts automation objects that communicate with various equipment on the factory floor. CPython is hosted inside this application server (using Python for .NET). The automation objects have scripting functionality built in (using a custom, .NET-based language). These scripts call Python functions. The Python functions are part of a system to track Work-In-Progress on the factory floor. The purpose of the system is to track the produced widgets along the process, ensure that the widgets go through the process in the correct order, and check that certain conditions are met along the process. The widget production history and widget state are stored in a relational database; this is where SQLAlchemy plays its part. For example, when a widget passes a scanner, the automation software triggers the following script (written in the application server's custom scripting language):

        ' widget_id and scanner_id provided by automation object
        ' ExecFunction() takes care of calling a CPython function
        retval = ExecFunction("WidgetScanned", widget_id, scanner_id);
        ' if the python function raises an Exception, ErrorOccured will be true
        ' in this case, any errors should cause the production line to stop.
        if (retval.ErrorOccured) then
            ProductionLine.Running = False;
            InformationBoard.DisplayText = "ERROR: " + retval.Exception.Message;
            InformationBoard.SoundAlarm = True
        end if;

    The script calls the WidgetScanned python function:

        # pywip/functions.py
        from pywip.database import session
        from pywip.model import Widget, WidgetHistoryItem
        from pywip import validation, StatusMessage
        from datetime import datetime

        def WidgetScanned(widget_id, scanner_id):
            widget = session.query(Widget).get(widget_id)
            validation.validate_widget_passed_scanner(widget, scanner)  # raises exception on error
            widget.history.append(WidgetHistoryItem(timestamp=datetime.now(), action=u"SCANNED", scanner_id=scanner_id))
            widget.last_scanner = scanner_id
            widget.last_update = datetime.now()
            return StatusMessage("OK")
        # ... there are a dozen similar functions

    My question is: how do I best manage SQLAlchemy sessions in this scenario? The application server is a long-running process, typically running months between restarts. The application server is single-threaded. Currently, I do it the following way: I apply a decorator to the functions I make available to the application server:

        # pywip/iasfunctions.py
        from pywip import functions

        def ias_session_handling(func):
            def _ias_session_handling(*args, **kwargs):
                try:
                    retval = func(*args, **kwargs)
                    session.commit()
                    return retval
                except:
                    session.rollback()
                    raise
            return _ias_session_handling

        # ... actually I populate this module with decorated versions of all the functions in pywip.functions dynamically
        WidgetScanned = ias_session_handling(functions.WidgetScanned)

    Question: Is the decorator above suitable for handling sessions in a long-running process? Should I call session.remove()? The SQLAlchemy session object is a scoped session:

        # pywip/database.py
        from sqlalchemy.orm import scoped_session, sessionmaker
        session = scoped_session(sessionmaker())

    I want to keep the session management out of the basic functions, for two reasons: (1) There is another family of functions, sequence functions. The sequence functions call several of the basic functions, and one sequence function should equal one database transaction. (2) I need to be able to use the library from other environments: (a) from a TurboGears web application, in which case session management is done by TurboGears; (b) from an IPython shell, in which case commit/rollback will be explicit. (I am truly sorry for the long question. But I felt I needed to explain the scenario. Perhaps not necessary?)

    Read the article

  • Need help with a possible memory management problem(leak) regarding NSMutableArray

    - by user309030
    Hi, I'm a beginner level programmer trying to make a game app for the iphone and I've encountered a possible issue with the memory management (exc_bad_access) of my program so far. I've searched and read dozens of articles regarding memory management (including apple's docs) but I still can't figure out what exactly is wrong with my codes. So I would really appreciate it if someone can help clear up the mess I made for myself. - (void)viewDidLoad { [super viewDidLoad]; self.gameState = gameStatePaused; fencePoleArray = [[NSMutableArray alloc] init]; fencePoleImageArray = [[NSMutableArray alloc] init]; fenceImageArray = [[NSMutableArray alloc] init]; mainField = CGRectMake(10, 35, 310, 340); .......... [NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(gameLoop) userInfo:nil repeats:YES]; } So basically, the player touches the screen to set up the fences/poles -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { if(.......) { ....... } else { UITouch *touch = [[event allTouches] anyObject]; currentTapLoc = [touch locationInView:touch.view]; NSLog(@"%i, %i", (int)currentTapLoc.x, (int)currentTapLoc.y); if(CGRectContainsPoint(mainField, currentTapLoc)) { if([self checkFence]) { onFencePole++; //this 3 set functions adds their respective objects into the 3 NSMutableArrays using addObject: [self setFencePole]; [self setFenceImage]; [self setFencePoleImage]; ....... } } else { ....... } } } } The setFence function (setFenceImage and setFencePoleImage is similar to this) -(void)setFencePole { Fence *fencePole; if (!elecFence) { fencePole = [[Fence alloc] initFence:onFencePole fenceType:1 fencePos:currentTapLoc]; } else { fencePole = [[Fence alloc] initFence:onFencePole fenceType:2 fencePos:currentTapLoc]; } [fencePoleArray addObject:fencePole]; [fencePole release]; and whenever I press a button in the game, endOpenState is called to clear away all the extra images(fence/poles) on the screen and also to remove all existing objects in the 3 NSMutableArray -(void)endOpenState { ........ int xMax = [fencePoleArray count]; int yMax = [fenceImageArray count]; for (int x = 0; x < xMax; x++) { [[fencePoleImageArray objectAtIndex:x] removeFromSuperview]; } for (int y = 0; y < yMax; y++) { [[fenceImageArray objectAtIndex:y] removeFromSuperview]; } [fencePoleArray removeAllObjects]; [fencePoleImageArray removeAllObjects]; [fenceImageArray removeAllObjects]; ........ } The crash happens here at the checkFence function. -(BOOL)checkFence { if (onFencePole == 0) { return YES; } else if (onFencePole >= 1 && onFencePole < currentMaxFencePole - 1) { CGPoint tempPoint1 = currentTapLoc; CGPoint tempPoint2 = [[fencePoleArray objectAtIndex:onFencePole-1] returnPos]; // the crash happens at this line if ([self checkDistance:tempPoint1 point2:tempPoint2]) { return YES; } else { return NO; } } else if (onFencePole == currentMaxFencePole - 1) { ...... } else { return NO; } } What I'm thinking of is that fencePoleArray got messed up when I used [fencePoleArray removeAllObjects] because it doesn't crash when I comment it out. It would really be great if someone can explain to me what went wrong. And thanks in advance.

    Read the article

  • T-SQL Improvements And Data Types in ms sql 2008

    - by Aamir Hasan
    Microsoft SQL Server 2008 is a new version released in the first half of 2008, introducing new properties and capabilities to the SQL Server product family. All these new and enhanced capabilities can be summed up with the classic words: secure, reliable, scalable and manageable. SQL Server 2008 is secure. It is reliable. It is scalable and is more manageable when compared to previous releases. Now we will have a look in detail at the features that make MS SQL Server 2008 more secure, more reliable, more scalable, etc. Microsoft SQL Server 2008 provides T-SQL enhancements that improve performance and reliability. Itzik discusses composable DML, the ability to declare and initialize variables in the same statement, compound assignment operators, and more reliable object dependency information.

    Table-Valued Parameters (a client-side sketch follows at the end of this outline)
    - Inserts into structures with 1-N cardinality are problematic: one order -> N order line items; "N" is variable and can be large
    - Don't want to force a new order for every 20 line items
    - One database round-trip per line item slows things down
    - No ARRAY data type in SQL Server; XML composition/decomposition used as an alternative
    - Table-valued parameters solve this problem
    - SQL Server has table variables: DECLARE @t TABLE (id int);
    - SQL Server 2008 adds strongly typed table variables: CREATE TYPE mytab AS TABLE (id int); DECLARE @t mytab;
    - Parameters must use strongly typed table variables

    Table Variables are Input Only
    - Declare and initialize the TABLE variable: DECLARE @t mytab; INSERT @t VALUES (1), (2), (3); EXEC myproc @t;
    - The procedure must declare the variable READONLY: CREATE PROCEDURE usetable (@t mytab READONLY ...) AS INSERT INTO lineitems SELECT * FROM @t; UPDATE @t SET... -- no!

    T-SQL Syntax Enhancements
    - Single statement declare and initialize: DECLARE @i int = 4;
    - Compound assignment operators: SET @i += 1;
    - Row constructors: DECLARE @t TABLE (id int, name varchar(20)); INSERT INTO @t VALUES (1, 'Fred'), (2, 'Jim'), (3, 'Sue');

    Grouping Sets
    - Grouping Sets allow multiple GROUP BY clauses in a single SQL statement: multiple, arbitrary sets of subtotals
    - Single read pass for performance; nested subtotals provide even better performance
    - Grouping Sets are an ANSI standard; COMPUTE BY is deprecated

    GROUPING SETS, ROLLUP, CUBE
    - SQL Server 2008 supports ANSI-syntax ROLLUP and CUBE; the pre-2008 non-ANSI syntax is deprecated
    - WITH ROLLUP produces n+1 different groupings of data, where n is the number of columns in the GROUP BY
    - WITH CUBE produces 2^n different groupings, where n is the number of columns in the GROUP BY
    - GROUPING SETS provide a "halfway measure": just the number of different groupings you need
    - Grouping Sets are visible in the query plan

    GROUPING_ID and GROUPING
    - Grouping Sets can produce non-homogeneous sets: a grouping set includes NULL values for group members, so you need to distinguish between groupings and NULL values
    - GROUPING (column expression) returns 0 or 1: is this a group based on the column expression or on a NULL value?
    - GROUPING_ID (a,b,c) is a bitmask: GROUPING_ID bits are set based on column expressions a, b, and c

    MERGE Statement
    - Multiple set operations in a single SQL statement, using multiple sets as input: MERGE target USING source ON ...
    - Operations can be INSERT, UPDATE, DELETE
    - Operations are based on WHEN MATCHED, WHEN NOT MATCHED [BY TARGET], WHEN NOT MATCHED [BY SOURCE]

    More on MERGE
    - The MERGE statement can reference a $action column, used when MERGE is used with the OUTPUT clause
    - Multiple WHEN clauses are possible for MATCHED and NOT MATCHED BY SOURCE; only one WHEN clause for NOT MATCHED BY TARGET
    - MERGE can be used with any table source
    - A MERGE statement causes triggers to be fired once; rows affected includes the total rows affected by all clauses

    MERGE Performance
    - The MERGE statement is transactional: no explicit transaction required
    - One pass through the tables: at most a full outer join
    - Matching rows = WHEN MATCHED; left-outer join rows = WHEN NOT MATCHED BY TARGET; right-outer join rows = WHEN NOT MATCHED BY SOURCE

    MERGE and Determinism
    - UPDATE using a JOIN is non-deterministic: if more than one row in the source matches the ON clause, either/any row can be used for the UPDATE
    - MERGE is deterministic: if more than one row in the source matches the ON clause, it's an error

    Keeping Track of Dependencies
    - New dependency views replace sp_depends; the views are kept in sync as changes occur
    - sys.dm_sql_referenced_entities lists all named entities that an object references (example: which objects does this stored procedure use?)
    - sys.dm_sql_referencing_entities
    Read the article

  • Grouping data in LINQ with the help of group keyword

    - by vik20000in
    While working with any kind of advanced query, grouping is a very important factor. Grouping allows special functions like sum, max, average, etc. to be executed on certain groups of data inside the result set. Grouping is done with the help of the Group method. Below is an example of the basic group functionality.

        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

        var numberGroups =
            from num in numbers
            group num by num % 5 into numGroup
            select new { Remainder = numGroup.Key, Numbers = numGroup };

    In the above example we have grouped the values based on the remainder left over when divided by 5. First we group the values based on the remainder when divided by 5 into the numGroup variable. numGroup.Key gives the value of the key on which the grouping has been applied, and numGroup itself contains all the records that are contained in that group. Below is another example to explain the same.

        string[] words = { "blueberry", "abacus", "banana", "apple", "cheese" };

        var wordGroups =
            from num in words
            group num by num[0] into grp
            select new { FirstLetter = grp.Key, Words = grp };

    In the above example we are grouping the values by the first character of the string (num[0]). Just like the ordering operators, the group by clause also allows us to supply our own logic for the equality comparison (which means, for example, that we can group items while ignoring case by writing our own implementation). For this we need to pass an object that implements the IEqualityComparer<string> interface. Below is an example.

        public class AnagramEqualityComparer : IEqualityComparer<string>
        {
            public bool Equals(string x, string y)
            {
                return getCanonicalString(x) == getCanonicalString(y);
            }

            public int GetHashCode(string obj)
            {
                return getCanonicalString(obj).GetHashCode();
            }

            private string getCanonicalString(string word)
            {
                char[] wordChars = word.ToCharArray();
                Array.Sort<char>(wordChars);
                return new string(wordChars);
            }
        }

        string[] anagrams = {"from   ", " salt", " earn", "  last   ", " near "};
        var orderGroups = anagrams.GroupBy(w => w.Trim(), new AnagramEqualityComparer());

    Vikram
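
    As a small addition to the post above (not in the original), the grouped results can be consumed by iterating each group's key and members, here using the numberGroups query from the first example:

        foreach (var g in numberGroups)
        {
            Console.WriteLine("Remainder {0}:", g.Remainder);
            foreach (var n in g.Numbers)
            {
                Console.WriteLine("  {0}", n);
            }
        }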

    Read the article

  • Display particular data into a file

    - by Avinash K G
    I'm new to Ubuntu and have been using it for a couple of weeks now. Recently I encountered a problem where in I had to display a particular data on to a file. Here is the output displayed on the terminal. Potential vulnerability found (CVE-2009-4028) CVSS Score is 6.8 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2009-4030) CVSS Score is 4.4 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2009-5026) CVSS Score is 6.8 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0075) CVSS Score is 1.7 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0087) CVSS Score is 4.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0101) CVSS Score is 4.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0102) CVSS Score is 4.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0112) CVSS Score is 3.5 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0113) CVSS Score is 5.5 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0114) CVSS Score is 3.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0115) CVSS Score is 4.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0116) CVSS Score is 4.9 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0118) CVSS Score is 4.9 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0119) CVSS Score is 4.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0120) CVSS Score is 4.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0484) CVSS Score is 4.0 Full vulnerability match (incl. 
edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0485) CVSS Score is 4.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0490) CVSS Score is 4.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0492) CVSS Score is 2.1 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0540) CVSS Score is 4.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0553) CVSS Score is 7.5 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0574) CVSS Score is 4.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2012-0583) CVSS Score is 4.0 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2013-1492) CVSS Score is 7.5 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2013-1506) CVSS Score is 2.8 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop) Potential vulnerability found (CVE-2013-1521) CVSS Score is 6.5 Full vulnerability match (incl. edition/language) File "/usr/sbin/mysqld" (CPE = cpe:/a:mysql:mysql:5.1:::) on host glynis-desktop (key glynis-desktop)
    I intend to display only the "Potential vulnerability found" field and the corresponding score. There seem to be about 9995 entries and I would like to display all of them. So far I have been using this command: awk '/CVSS Score is/ < /Potential vulnerability found/' output.txt but it seems to display only the name of the vulnerability or only the score. How do I write this to a file (text, Excel) such that every vulnerability and its corresponding score will be displayed? Any help would be appreciated. Thank you.
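
    The original poster is working with awk, but purely to illustrate the extraction being asked for (an added sketch; file names are placeholders), the same pairing of CVE and score can be written out as CSV, which opens directly in Excel:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        class CveScoreExtract
        {
            static void Main()
            {
                // Pairs each "Potential vulnerability found (CVE-...)" with the
                // "CVSS Score is ..." that follows it, and writes CSV for Excel.
                var pattern = new Regex(
                    @"Potential vulnerability found \((CVE-[0-9-]+)\)\s+CVSS Score is ([0-9.]+)");

                string text = File.ReadAllText("output.txt");
                using (var csv = new StreamWriter("scores.csv"))
                {
                    csv.WriteLine("CVE,Score");
                    foreach (Match m in pattern.Matches(text))
                    {
                        csv.WriteLine("{0},{1}", m.Groups[1].Value, m.Groups[2].Value);
                    }
                }
            }
        }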

    Read the article

  • Wipe, Delete, and Securely Destroy Your Hard Drive’s Data the Easy Way

    - by The Geek
    Giving a computer to somebody else? Maybe you’re putting it out on Craigslist to sell to a stranger—either way, you’ll want to make sure that your drive is completely wiped, scrubbed, and clean of any personal data. Here’s the easy way to do it. If you only have access to an Ubuntu Live CD or thumb drive, you can actually use that instead if you prefer, and we’ve got you covered with a full guide to securely wiping your PC’s hard drive. Otherwise, keep reading.
    Wipe the Drive with DBAN: Darik’s Boot and Nuke CD is the easiest way to permanently and totally destroy every bit of personal information on that drive—nobody is going to recover a thing once this is done. The first thing you’ll need to do is download a copy of the ISO image, and then burn it to a blank CD with something really useful like Imgburn. Just choose Burn image to Disc at the start screen, select the little file icon, grab the downloaded ISO, and then go. If you need a little more help, we’ve got you covered with a beginner’s guide to burning an ISO image. Once you’re done, stick the disc into the drive, start the PC up, and then once you boot to the DBAN prompt you’ll see a menu. You can pretty much ignore everything on here, and just type autonuke And there you are, your disk is now being securely wiped. Once it’s all done, you can remove the CD, and then either pack the PC up to sell, or re-install Windows on there if you feel like it.
    More Advanced Method: If you’re really paranoid, want to run a different type of wipe, or just like fiddling with the options, you can choose F3 or hit Enter at the prompt to head to the advanced selection screen. Here you can choose exactly which drive to wipe, or hit the M key to change the method. You’ll be able to choose between a bunch of different wipe options. The Quick Erase is all you really need though.
    So there you are, easy PC wiping in one package. What about you? Do you make sure to wipe your old PCs before giving them away? Personally I’ve always just yanked out the hard drives before I got rid of an old PC, but that’s just me. Download DBAN from dban.org

    Read the article

  • Code Structure / Level Design: Plants vs Zombies game level dissection

    - by lalan
    Hi Friends, I am interested in learning the class structure of Plants vs Zombies, particularly level design; for those who haven't played it - this video contains nice play-through: http://www.youtube.com/watch?v=89DfdOIJ4xw. How would I go ahead and design the code, mostly structure & classes, which allows for maximum flexibility & clean development? I am familiar with data driven design concepts, and would use events to handle most of dynamic behavior. Dissection at macro level: (Once every Level) Load tilemap, props, etc -- basically build the map (Once every Level) Camera Movement - might consider it as short cut-scene (Once every Level) Show Enemies you'll face during present level (Once every Level) Unit Selection Window/Panel - selection of defensive plants (Once every Level) Camera Movement - might consider it as short cut-scene (Once every Level) HUD Creation - based on unit selection (Level Loop) Enemy creation - based on types of zombies allowed (Level Loop) Sun/Resource generation (Level Loop) Show messages like 'huge wave of zombies coming', 'final wave' (Level Loop) Other unique events - Spawn gifts, money, tombstones, etc (Once every Level) Unlock new plant Potential game scripts: a) Level definitions: Level_1_1.xml, Level_1_2.xml, etc. Level_1_1.xml :: Sample script <map> <tilemap>tilemapFrontLawn</tilemap> <SpawnPoints> tiles where particular type of zombies (land vs water) may spawn</spawnPoints> <props> position, entity array -- lawnmower, </props> </map> <zombies> <... list of zombies who gonna attack by ids...> </zombies> <plants> <... list by plants which are available for defense by ids...> </plants> <progression> <ZombieWave name='first wave' spawnScript='zombieLightWave.lua' unlock='null'> <startMessages time=1.5>Ready</startMessages> <endMessages time=1.5>Huge wave of zombies incoming</endMessages> </ZombieWave> </progression> b) Entities definitions: .xmls containing zombies, plants, sun, lawnmower, coins, etc description. Potential classes: //LevelManager - Based on the level under play, it will load level script. Few of the // functions it may have: class LevelManager { public: bool load(string levelFileName); bool enter(); bool update(float deltatime); bool exit(); private: LevelData* mLevelData; } // LevelData - Contains the details of level loaded by LevelManager. class LevelData { private: string file; // array of camera,dialog,attackwaves, etc in active level LevelCutSceneCamera** mArrayCutSceneCamera; LevelCutSceneDialog** mArrayCutSceneDialog; LevelAttackWave** mArrayAttackWave; .... // which camera,dialog,attackwave is active in level uint mCursorCutSceneCamera; uint mCursorCutSceneDialog; uint mCursorAttackWave; public: // based on cursor, get the next camera,dialog,attackwave,etc in active level // return false/true based on failure/success bool nextCutSceneCamera(LevelCutSceneCamera**); bool nextCutSceneDialog(LevelCutSceneDialog**); } // LevelUnderPlay- LevelManager class LevelUnderPlay { private: LevelCutSceneCamera* mCutSceneCamera; LevelCutSceneDialog* mCutSceneDialog; LevelAttackWave* mAttackWave; Entities** mSelectedPlants; Entities** mAllowedZombies; bool isCutSceneCameraActive; public: bool enter(); bool update(float deltatime); bool exit(); } I am totally confused.. :( Does it make sense of using class composition (have flat class hierarchy) for managing levels. Is it a good idea to just add/remove/update sprites (or any drawable stuff) to current scene from LevelManager or LevelUnderPlay? 
If I want to make non-linear level design, how should I go ahead? Perhaps I would need a LevelProgression class, which would decide what to do based on decision tree. Any suggestions would be appreciated very much. Thank for your time, lalan

    Read the article

  • Finding the maximum value/date across columns

    - by AtulThakor
    While working on some code recently I discovered a neat little trick to find the maximum value across several columns. The starting point was finding the maximum date across several related tables and storing the maximum value against an aggregated record. Here's the sample setup code:

        USE TEMPDB

        IF OBJECT_ID('CUSTOMER') IS NOT NULL
        BEGIN
            DROP TABLE CUSTOMER
        END

        IF OBJECT_ID('ADDRESS') IS NOT NULL
        BEGIN
            DROP TABLE ADDRESS
        END

        IF OBJECT_ID('ORDERS') IS NOT NULL
        BEGIN
            DROP TABLE ORDERS
        END

        SELECT 1 AS CUSTOMERID, 'FREDDY KRUEGER' AS NAME, GETDATE() - 10 AS DATEUPDATED
        INTO CUSTOMER

        SELECT 100000 AS ADDRESSID, 1 AS CUSTOMERID, '1428 ELM STREET' AS ADDRESS, GETDATE() - 5 AS DATEUPDATED
        INTO ADDRESS

        SELECT 123456 AS ORDERID, 1 AS CUSTOMERID, GETDATE() + 1 AS DATEUPDATED
        INTO ORDERS

    Now, the original code used a function to determine the maximum date, and this performed poorly. After considering pivoting the data I opted for a case statement; this seemed reasonable until I discovered other areas which needed to determine the maximum date between 5 or more tables, which didn't scale well. The final solution involved using the VALUES clause within a subquery, as follows:

        SELECT
            C.CUSTOMERID,
            A.ADDRESSID,
            (SELECT MAX(DT)
             FROM (VALUES (C.DATEUPDATED), (A.DATEUPDATED), (O.DATEUPDATED)) AS VALUE(DT))
        FROM CUSTOMER C
        INNER JOIN ADDRESS A ON C.CUSTOMERID = A.CUSTOMERID
        INNER JOIN ORDERS O ON O.CUSTOMERID = C.CUSTOMERID

    As you can see, the solution scales well and can take advantage of many of the aggregate functions!

    Read the article

  • export web page data to excel using javascript [on hold]

    - by Sreevani sri
    I have created a web page using HTML. When I click the submit button, the form data should be exported to Excel; I want to do that export using JavaScript. My HTML code is:

    1. Please give your Name:<input type="text" name="Name" /><br />
    2. Area where you reside:<input type="text" name="Res" /><br />
    3. Specify your age group<br />
       (a)15-25<input type="text" name="age" /> (b)26-35<input type="text" name="age" /> (c)36-45<input type="text" name="age" /> (d) Above 46<input type="text" name="age" /><br />
    4. Specify your occupation<br />
       (a) Student<input type="checkbox" name="occ" value="student" /> (b) Home maker<input type="checkbox" name="occ" value="home" /> (c) Employee<input type="checkbox" name="occ" value="emp" /> (d) Businesswoman <input type="checkbox" name="occ" value="buss" /> (e) Retired<input type="checkbox" name="occ" value="retired" /> (f) others (please specify)<input type="text" name="others" /><br />
    5. Specify the nature of your family<br />
       (a) Joint family<input type="checkbox" name="family" value="jfamily" /> (b) Nuclear family<input type="checkbox" name="family" value="nfamily" /><br />
    6. Please give the Number of female members in your family and their average age approximately<br />
       Members Age 1 2 3 4 5<br />
    8. Please give your highest level of education
       (a)SSC or below<input type="checkbox" name="edu" value="ssc" /> (b) Intermediate<input type="checkbox" name="edu" value="int" /> (c) Diploma <input type="checkbox" name="edu" value="dip" /> (d)UG degree <input type="checkbox" name="edu" value="deg" /> (e) PG <input type="checkbox" name="edu" value="pg" /> (g) Doctorial degree<input type="checkbox" name="edu" value="doc" /><br />
    9. Specify your monthly income approximately in RS <input type="text" name="income" /><br />
    10. Specify your time spent in making a purchase decision at the outlet<br />
       (a)0-15 min <input type="checkbox" name="dis" value="0-15 min" /> (b)16-30 min <input type="checkbox" name="dis" value="16-30 min" /> (c) 30-45 min<input type="checkbox" name="dis" value="30-45 min" /> (d) 46-60 min<input type="checkbox" name="dis" value="46-60 min" /><br />
    <input type="submit" onclick="exportToExcel()" value="Submit" />
    </div>
    </form>

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs:

    Any record generated must be able to be connected to any other record in any other user table (excluding itself... the record, not the table). These "connections" are directional, and the list of connections a record has is user-ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's fields can also include aggregate information from its connections (like average, sum, etc.) that must be updated on change from another record it's connected to. To conserve memory, only relevant information must be loaded at any one time (I can't load the entire database into memory at load and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote DB. Neither the user tables, connections nor records are known at design time, as they are user-generated.

    I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt, I had one object managing all of a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became... onerous (i.e. a huge spaghettified mess). Tracing dependencies using this method became almost impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing its own data and connections to other records. Doing this increases my memory footprint but also lets me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cyclic recursive updates, etc.

    My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas, so I wanted to ask and see if anybody else has ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far:

    1. Store all the connections in one big table. If I do this, either I load all connections at once (one big DB call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down.

    2. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB call to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it.

    3. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming and outgoing). This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal, but it does mean that when I load a user table, I only need to load one 'connection' table to have all the information I need. This also presents a separate problem, that of connection object creation. Since each user table has a list of all its connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection.

    Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
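    For what it's worth, the caching/factory object I have in mind for option 3 would look roughly like this - a hypothetical C# sketch, with every name made up, not a finished implementation:

    using System;
    using System.Collections.Generic;

    // Guarantees one in-memory connection object per stored connection,
    // no matter which "side" (incoming or outgoing table) loads it first.
    public sealed class ConnectionCache
    {
        // Key identifying a stored connection, independent of which side loaded it.
        private struct ConnectionKey : IEquatable<ConnectionKey>
        {
            public int FromTableId, FromRecordId, ToTableId, ToRecordId;

            public bool Equals(ConnectionKey other)
            {
                return FromTableId == other.FromTableId && FromRecordId == other.FromRecordId
                    && ToTableId == other.ToTableId && ToRecordId == other.ToRecordId;
            }

            public override int GetHashCode()
            {
                return FromTableId ^ (FromRecordId << 8) ^ (ToTableId << 16) ^ (ToRecordId << 24);
            }
        }

        private readonly Dictionary<ConnectionKey, Connection> cache =
            new Dictionary<ConnectionKey, Connection>();

        // Both the incoming and outgoing loaders call this; the second caller
        // simply gets handed the object created by the first.
        public Connection GetOrCreate(int fromTableId, int fromRecordId, int toTableId, int toRecordId)
        {
            var key = new ConnectionKey
            {
                FromTableId = fromTableId, FromRecordId = fromRecordId,
                ToTableId = toTableId, ToRecordId = toRecordId
            };

            Connection existing;
            if (!cache.TryGetValue(key, out existing))
            {
                existing = new Connection(fromTableId, fromRecordId, toTableId, toRecordId);
                cache.Add(key, existing);
            }
            return existing;
        }
    }

    // Minimal stand-in for the real connection object.
    public class Connection
    {
        public readonly int FromTableId, FromRecordId, ToTableId, ToRecordId;

        public Connection(int fromTableId, int fromRecordId, int toTableId, int toRecordId)
        {
            FromTableId = fromTableId; FromRecordId = fromRecordId;
            ToTableId = toTableId; ToRecordId = toRecordId;
        }
    }

    Whichever side loads its connection table first would populate the cache; the other side would simply receive the same instance.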

    Read the article

  • Generically correcting data before save with Entity Framework

    - by koevoeter
    Been working with Entity Framework (.NET 4.0) for a week now for a data migration job and needed some code that generically corrects string values in the database. You probably also have seen things like empty strings instead of NULL or non-trimmed texts ("United States       ") in "old" databases, and you don't want to apply a correcting function on every column you migrate. Here's how I've done this (extending the partial class of my ObjectContext):

    public partial class MyDatacontext
    {
        partial void OnContextCreated()
        {
            SavingChanges += OnSavingChanges;
        }

        private void OnSavingChanges(object sender, EventArgs e)
        {
            foreach (var entity in GetPersistingEntities(sender))
            {
                foreach (var propertyInfo in GetStringProperties(entity))
                {
                    var value = (string)propertyInfo.GetValue(entity, null);

                    if (value == null)
                    {
                        continue;
                    }

                    if (value.Trim().Length == 0 && IsNullable(propertyInfo))
                    {
                        propertyInfo.SetValue(entity, null, null);
                    }
                    else if (value != value.Trim())
                    {
                        propertyInfo.SetValue(entity, value.Trim(), null);
                    }
                }
            }
        }

        private IEnumerable<object> GetPersistingEntities(object sender)
        {
            return ((ObjectContext)sender).ObjectStateManager
                .GetObjectStateEntries(EntityState.Added | EntityState.Modified)
                .Select(e => e.Entity);
        }

        private IEnumerable<PropertyInfo> GetStringProperties(object entity)
        {
            return entity.GetType().GetProperties()
                .Where(pi => pi.PropertyType == typeof(string));
        }

        private bool IsNullable(PropertyInfo propertyInfo)
        {
            return ((EdmScalarPropertyAttribute)propertyInfo
                .GetCustomAttributes(typeof(EdmScalarPropertyAttribute), false)
                .Single()).IsNullable;
        }
    }

    Obviously you can use similar code for other generic corrections.
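    For instance, a hypothetical extra correction (not part of the code above) that collapses runs of internal whitespace could be added as a small helper and called from the same property loop:

    using System.Text.RegularExpressions;

    public static class StringCorrections
    {
        // Hypothetical helper: collapse internal runs of whitespace after trimming,
        // e.g. "United  States   " -> "United States".
        public static string NormalizeWhitespace(string value)
        {
            if (value == null)
            {
                return null;
            }
            return Regex.Replace(value.Trim(), @"\s{2,}", " ");
        }
    }

    Inside the loop you would then call propertyInfo.SetValue(entity, StringCorrections.NormalizeWhitespace(value), null) in place of (or in addition to) the plain Trim() calls.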

    Read the article
