Search Results

Search found 784 results on 32 pages for 'chars'.


  • Textbox time validations (javascript)

    - by unos
    I have a textbox in which the user can enter a time (e.g. 01:00) and also a drop-down box for selecting AM/PM. (Since the AM/PM field is used, the 12-hour time format applies.) The text box allows a max entry of 5 chars only (e.g. 01:00). Please let me know how I can set the 3rd char to a default colon ':' so that the user simply has to enter only the time. How do I check whether the time entered by the user is numeric or not? Autocompletion feature: e.g., if the user enters 1 then it would automatically be set to 01:00. JavaScript validations for the 12-hour format, e.g. if the user enters 13:00 then it should change to 01:00. How can I append the text box time value to the am/pm value selected in the drop-down box? Once the values are appended, automatically populate another text box (text box 2) with the result, e.g. 01:00 + pm should be set as 01:00p in the new text box (text box 2). Any help would be appreciated.
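    A minimal client-side sketch of that normalization, assuming hypothetical element IDs (timeBox, ampmSelect, resultBox) and wired to the text box's onblur event:

        function normalizeTime() {
            var box = document.getElementById('timeBox');
            var m = box.value.match(/^(\d{1,2})(?::?(\d{0,2}))?$/); // digits, optional colon
            if (!m) { alert('Please enter a numeric time such as 01:00'); return; }
            var h = parseInt(m[1], 10), min = parseInt(m[2] || '0', 10);
            if (h > 12) h -= 12;                        // 13:00 becomes 01:00
            if (h < 1 || h > 12 || min > 59) { alert('Invalid 12-hour time'); return; }
            function pad(n) { return (n < 10 ? '0' : '') + n; }
            box.value = pad(h) + ':' + pad(min);        // "1" autocompletes to "01:00"
            var ampm = document.getElementById('ampmSelect').value; // "am" or "pm"
            document.getElementById('resultBox').value = box.value + ampm.charAt(0);
        }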

    Read the article

  • Python Coding standards vs. productivity

    - by Shroatmeister
    I work for a large humanitarian organisation, on a project building software that could help save lives in emergencies by speeding up the distribution of food. Many NGOs desperately need our software and we are weeks behind schedule. One thing that worries me in this project is what I think is an excessive focus on coding standards. We write in python/django and use a version of PEP0008, with various modifications e.g. line lengths can go up to 160 chars and all lines should go that long if possible, no blank lines between imports, line wrapping rules that apply only to certain kinds of classes, lots of templates that we must use, even if they aren't the best way to solve a problem etc. etc. One core dev spent a week rewriting a major part of the system to meet the then new coding standards, throwing away several suites of tests in the process, as the rewrite meant they were 'invalid'. We spent two weeks rewriting all the functionality that was lost, and fixing bugs. He is the lead dev and his word carries weight, so he has convinced the project manager that these standards are necessary. The junior devs do as they are told. I sense that the project manager has a strong feeling of cognitive dissonance about all this but nevertheless agrees with it vehemently as he feels unsure what else to do. Today I got in serious trouble because I had forgotten to put some spaces after commas in a keyword argument. I was literally shouted at by two other devs and the project manager during a Skype call. Personally I think coding standards are important but also think that we are wasting a lot of time obsessing with them, and when I verbalized this it provoked rage. I'm seen as a troublemaker in the team, a team that is looking for scapegoats for its failings. Since the introduction of the coding standards, the team's productivity has measurably plummeted, however this only reinforces the obsession, i.e. the lead dev simply blames our non-adherence to standards for the lack of progress. He believes that we can't read each other's code if we don't adhere to the conventions. This is starting to turn sticky. Now I am trying to modify various scripts, autopep8, pep8ify and PythonTidy to try to match the conventions. We also run pep8 against source code but there are so many implicit amendments to our standard that it's hard to track them all. The lead dev simple picks faults that the pep8 script doesn't pick up and shouts at us in the next stand-up meeting. Every week there are new additions to the coding standards that force us to rewrite existing, working, tested code. Thank heavens we still have tests, (I reverted some commits and fixed a bunch of the ones he removed). All the while there is increasing pressure to meet the deadline. I believe a fundamental issue is that the lead dev and another core dev refuse to trust other developers to do their job. But how to deal with that? We can't do our job because we are too busy rewriting everything. I've never encountered this dynamic in a software engineering team. Am I wrong to question their adherence to coding standards? Has anyone else experienced a similar situation and how have they dealt with it successfully? (I'm not looking for a discussion just actual solutions people have found)

    Read the article

  • JDK 7u25: Solutions to Issues caused by changes to Runtime.exec

    - by Devika Gollapudi
    The following examples were prepared by Java engineering for the benefit of Java developers who may have faced issues with Runtime.exec on the Windows platform.

    Background

    In JDK 7u21, the decoding of command strings passed to the Runtime.exec(String), Runtime.exec(String,String[]) and Runtime.exec(String,String[],File) methods was made more strict. See the JDK 7u21 Release Notes for more information. This caused several issues for applications. The following sections describe some of the problems faced by developers and their solutions.

    Note: In JDK 7u25, the system property jdk.lang.Process.allowAmbiguousCommands can be used to relax the checking process and serves as a workaround for some applications that cannot be changed. The workaround is only effective for applications that are run without a SecurityManager. See the JDK 7u25 Release Notes for more information.

    Note: To understand the details of the Windows API CreateProcess call, see: http://msdn.microsoft.com/en-us/library/windows/desktop/ms682425%28v=vs.85%29.aspx

    There are two forms of Runtime.exec calls: with the command as a string, "Runtime.exec(String command[, ...])", and with the command as a string array, "Runtime.exec(String[] cmdarray [, ...])". The issues described in this section relate to the first form. With the first form, developers expect the command to be passed "as is" to Windows, where the command would then be split into its executable name and arguments. But, in accordance with the Java API, the command argument is split into executable name and arguments by spaces.

    Problem 1: "The file path for the command includes spaces"

    In the call:

        Runtime.getRuntime().exec("c:\\Program Files\\do.exe")

    the argument is split by spaces into an array of strings: c:\\Program, Files\\do.exe. The first element of the parsed array is interpreted as the executable name, verified by the SecurityManager (if present) and surrounded by quotation marks to avoid ambiguity in the executable path. This results in the wrong command:

        "c:\\Program" "Files\\do.exe"

    which will fail.

    Solution: Use the ProcessBuilder class, or the Runtime.exec(String[] cmdarray [, ...]) call, or quote the executable path. Where it is not possible to change the application code and where a SecurityManager is not used, the Java property jdk.lang.Process.allowAmbiguousCommands can be set to "true" from the command line:

        -Djdk.lang.Process.allowAmbiguousCommands=true

    This will relax the checking process to allow ambiguous input. Examples:

        new ProcessBuilder("c:\\Program Files\\do.exe").start()
        Runtime.getRuntime().exec(new String[]{"c:\\Program Files\\do.exe"})
        Runtime.getRuntime().exec("\"c:\\Program Files\\do.exe\"")

    Problem 2: "Shell command/.bat/.cmd IO redirection"

    The following implicit cmd.exe calls:

        Runtime.getRuntime().exec("dir > temp.txt")
        new ProcessBuilder("foo.bat", ">", "temp.txt").start()
        Runtime.getRuntime().exec(new String[]{"foo.cmd", ">", "temp.txt"})

    lead to the wrong command:

        "XXXX" ">" temp.txt

    Solution: To specify the command correctly, use the following options:

        Runtime.getRuntime().exec("cmd /C \"dir > temp.txt\"")
        new ProcessBuilder("cmd", "/C", "foo.bat > temp.txt").start()
        Runtime.getRuntime().exec(new String[]{"cmd", "/C", "foo.cmd > temp.txt"})

    or

        Process p = new ProcessBuilder("cmd", "/C", "XXX").redirectOutput(new File("temp.txt")).start();

    Problem 3: "Group execution of shell command and/or .bat/.cmd files"

    Due to the enforced verification procedure, arguments in the following calls create the wrong commands:

        Runtime.getRuntime().exec("first.bat && second.bat")
        new ProcessBuilder("dir", "&&", "second.bat").start()
        Runtime.getRuntime().exec(new String[]{"dir", "|", "more"})

    Solution: To specify the command correctly, use the following options:

        Runtime.getRuntime().exec("cmd /C \"first.bat && second.bat\"")
        new ProcessBuilder("cmd", "/C", "dir && second.bat").start()
        Runtime.getRuntime().exec(new String[]{"cmd", "/C", "dir | more"})

    The same scenario also works for the "&", "||", "^" operators of the cmd.exe shell.

    Problem 4: ".bat/.cmd with special DOS chars in quoted params"

    Due to enforced verification, arguments in the following calls will cause exceptions to be thrown:

        Runtime.getRuntime().exec("log.bat \"error
        new ProcessBuilder("log.bat", "error
        Runtime.getRuntime().exec(new String[]{"log.bat", "error

    Solution: To specify the command correctly, use the following options:

        Runtime.getRuntime().exec("cmd /C log.bat \"error
        new ProcessBuilder("cmd", "/C", "log.bat", "error
        Runtime.getRuntime().exec(new String[]{"cmd", "/C", "log.bat", "error

    Examples: Complicated redirection for a shell construction such as:

        cmd /c dir /b C:\ > "my lovely spaces.txt"

    becomes:

        Runtime.getRuntime().exec(new String[]{"cmd", "/C", "dir /b > \"my lovely spaces.txt\""});

    The Golden Rule: In most cases, cmd.exe has two arguments: "/C" and the command for interpretation.
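    As a self-contained illustration of Problem 1, the sketch below contrasts the failing string form with the array form (it assumes an executable actually exists at the quoted path):

        import java.io.File;

        public class ExecDemo {
            public static void main(String[] args) throws Exception {
                // Fails on 7u21 and later: the string is split at the space and
                // "c:\\Program" is taken as the executable name.
                // Runtime.getRuntime().exec("c:\\Program Files\\do.exe");

                // Works: the path is a single array element, so nothing is split,
                // and output redirection is handled by the API, not the shell.
                Process p = new ProcessBuilder("c:\\Program Files\\do.exe")
                        .redirectOutput(new File("temp.txt"))
                        .start();
                System.out.println("exit code: " + p.waitFor());
            }
        }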

    Read the article

  • gmail dkim=neutral (no signature)

    - by Bretticus
    After testing much and retracing my steps, I still cannot get google mail to validate. My mail server is Debian 5.0 with exim Exim version 4.72 #1 built 31-Jul-2010 08:12:17 Copyright (c) University of Cambridge, 1995 - 2007 Berkeley DB: Berkeley DB 4.8.24: (August 14, 2009) Support for: crypteq iconv() IPv6 PAM Perl Expand_dlfunc GnuTLS move_frozen_messages Content_Scanning DKIM Old_Demime Lookups: lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmnz dnsdb dsearch ldap ldapdn ldapm mysql nis nis0 passwd pgsql sqlite Authenticators: cram_md5 cyrus_sasl dovecot plaintext spa Routers: accept dnslookup ipliteral iplookup manualroute queryprogram redirect Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp Fixed never_users: 0 Size of off_t: 8 GnuTLS compile-time version: 2.4.2 GnuTLS runtime version: 2.4.2 Configuration file is /var/lib/exim4/config.autogenerated My remote smtp transport configuration: remote_smtp: debug_print = "T: remote_smtp for $local_part@$domain" driver = smtp helo_data = mailer.mydomain.com dkim_domain = mydomain.com dkim_selector = mailer dkim_private_key = /etc/exim4/dkim/mailer.mydomain.com.key dkim_canon = relaxed .ifdef REMOTE_SMTP_HOSTS_AVOID_TLS hosts_avoid_tls = REMOTE_SMTP_HOSTS_AVOID_TLS .endif .ifdef REMOTE_SMTP_HEADERS_REWRITE headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE .endif .ifdef REMOTE_SMTP_RETURN_PATH return_path = REMOTE_SMTP_RETURN_PATH .endif .ifdef REMOTE_SMTP_HELO_FROM_DNS helo_data=REMOTE_SMTP_HELO_DATA .endif The path to my private key is correct. I see a DKIM header in my messages as they end up in my gmail account: DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mydomain.com; s=mailer; h=Content-Type:MIME-Version:Message-ID:Date:Subject:Reply-To:To:From; bh=nKgQAFyGv<snip>tg=; b=m84lyYvX6<snip>RBBqmW52m1ce2g=; However, gmail headers always report dkim=neutral (no signature): dkim=neutral (no signature) [email protected] My DNS results: dig +short txt mailer._domainkey.mydomain.com mailer._domainkey. mydomain.com descriptive text "v=DKIM1\; k=rsa\; t=y\; p=LS0tLS1CRUdJ<snip>M0RRRUJBUVV" "BQTRHTkFEQ0J<snip>GdLamdaaG" "JwaFZkai93b3<snip>laSCtCYmdsYlBrWkdqeVExN3gxN" "mpQTzF6OWJDN3hoY21LNFhaR0NjeENMR0FmOWI4Z<snip>tLQo=" Note that the base64 public key is 364 chars long so I had to break up the key using bind9. $ORIGIN _domainkey. mydomain.com. mailer TXT ("v=DKIM1; k=rsa; t=y; p=LS0tLS1CRUdJTiBQVUJM<snip>U0liM0RRRUJBUVV" "BQTRHTkFEQ0JpUUtCZ1<snip>15MGdLamdaaG" "JwaFZkai93b3lDK21MR<snip>YlBrWkdqeVExN3gxN" "mpQTzF6OWJDN3hoY21L<snip>Ci0tLS0tRU5E" "IFBVQkxJQyBLRVktLS0tLQo=") Can anyone point me in the right direction? I would really appreciate it.
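    One thing worth checking in the output above: "LS0tLS1CRUdJ" is the Base64 encoding of the literal text "-----BEGI", which suggests the published p= value is the whole PEM file re-encoded rather than the key body alone. A short sketch to test that theory and to print the value p= should carry (paths as in the question):

        # Decode the start of the published p= value; a DER key does not
        # decode to "-----BEGIN ...".
        echo 'LS0tLS1CRUdJ' | base64 -d

        # The correct p= value is the public-key PEM body, unwrapped:
        openssl rsa -in /etc/exim4/dkim/mailer.mydomain.com.key -pubout |
          grep -v '^-----' | tr -d '\n'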

    Read the article

  • Ubuntu 12.04 crash analysis - strange binary data on all open files at the moment of crash

    - by lanbo
    A couple of hours ago we got a system crash on Ubuntu 12.04. We checked all the log files and there is nothing suspicious to blame. The last thing logged was some dovecot activity. There are no kernel panic messages. Nothing. It is a new server (new hardware) we are testing before production. And because it is new hardware, I suspect the problem may be due to a faulty component. We already ran memtester with no problem detected. I'd be happy to hear about other hardware testing tools (the machine has an SSD). Anyway, the thing I wanted to ask you is a different one. The strange thing is that in every file open at the moment of the crash, we found the following sequence of symbols written: "@^@^@^@^@^@^@...". For example, in the syslog log file we got: Apr 16 15:53:56 odyssey dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<info>, method=PLAIN, rip=46.29.255.73, lip=5.9.58.177 ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^ [these continue for about 1000 chars...] ^@^@^@^@Apr 16 15:55:12 odyssey kernel: imklog 5.8.6, log source = /proc/kmsg started. We got these symbols in all open files. These include: syslog, mail.log, kern.log, ... but also some logs written by PHP scripts run from CRONs under user accounts (not root). So, any idea why all open files got these characters written during the crash? This is pretty bad since the crash corrupted many files (we don't even know which other ones may be affected). We suspect that all files open (in write mode, maybe) at the moment of the crash got these symbols inserted. Why is that? BTW [in case it helps], the system automatically rebooted after the crash but Apache did not start. There were no traces in /var/apache2/*log of why Apache did not start. After running "service apache2 start" it started with no problems. Also, we rebooted the machine manually and Apache started on reboot too. But it did not start after the crash and no errors were reported. Thanks guys!
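    For what it's worth, this pattern usually means the filesystem committed the files' new, larger sizes to its journal before the corresponding data blocks were flushed; after the crash the unwritten regions read back as NUL bytes (0x00, rendered as ^@). A rough sketch for finding affected files, assuming GNU grep:

        # List files under /var/log containing long runs of NUL bytes
        grep -rlaP '\x00{64}' /var/log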

    Read the article

  • What do Windows 7 encrypted files look like?

    - by Sean Farrell
    OK, this is kind of an odd question: what do Windows 7 (Home Premium) encrypted files look like "from the outside"? Now here is the story. An acquaintance of a friend of mine got a nasty virus / scareware. So I whipped out my PC-technician cap and went to work on it. What I did was remove the drive from the laptop and put the drive into my external drive bay. I scanned the drive and yes, it was loaded with stuff. That basically cured the infection and I could start the system back up. To check if it cured the problem I wanted to see the system while running. There were two user accounts, one with a password and one without (both admin users!?!). So I logged into the unprotected user and cleaned up the residual issues, like the proxy server to localhost in the browser config. Now I wanted to do the same for the password-protected user. What I noticed was that from my system and the unprotected user account, the files of the protected user looked garbled. The file names are something like 12 random alphanumeric chars, but the folders looked OK. Naive as I was, I thought this might be how encrypted files look "from the outside". (I never use Microsoft's own security features, so how would I know? TrueCrypt is one big blob.) Since the second user could not be reached, I thought sod it and removed the password from the account. (That might have been a mistake, I know.) Now I did the same clean-up tasks and all was nice and fine; except for the files, which were still "encrypted". So I looked into many Windows encrypted-file recovery posts, and not all hope is lost, since I should be able to extract the certificate and, with the password, regain access to the files. Also note that Windows did "only" prompt me that removing the password would be insecure, not that access to encrypted files would be lost, as is claimed in most recovery articles. Resetting the password did not help and I gave up for the night. The question that nagged me half of the last night was: what if the files are not encrypted, but the scareware encrypted / destroyed the files? I don't want to spend hours of work trying to recover files that are not recoverable. The thing is that the user does not remember turning it on, and aren't encrypted files marked in blue, with the file name readable? Many thanks for input from users who have more knowledge about EFS...

    Read the article

  • mysql not starting

    - by Eiriks
    I have a server running on rackspace.com, it been running for about a year (collecting data for a project) and no problems. Now it seems mysql froze (could not connect either through ssh command line, remote app (sequel pro) or web (pages using the db just froze). I got a bit eager to fix this quick and rebooted the virtual server, running ubuntu 10.10. It is a small virtual LAMP server (10gig storage - I'm only using 1, 256mb ram -has not been a problem). Now after the reboot, I cannot get mysql to start again. service mysql status mysql stop/waiting I believe this just means mysql is not running. How do I get this running again? service mysql start start: Job failed to start No. Just typing 'mysql' gives: mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) There is a .sock file in this folder, 'ls -l' gives: srwxrwxrwx 1 mysql mysql 0 2012-12-01 17:20 mysqld.sock From googleing this for a while now, I see that many talk about the logfile and my.cnf. Logs Not sure witch ones I should look at. This log-file is empty: 'var/log/mysql/error.log', so is the 'var/log/mysql.err' and 'var/log/mysql.log'. my.cnf is located in '/etc/mysql' and looks like this. Can't see anything clearly wrong with it either. # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! 
#general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ I need the data in the database (so i'd like to avoid reinstalling), and I need it back up running again. All hint, tips and solutions are welcomed and appreciated.
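    Since the logs are empty, a sketch of how to surface the real startup error is to run the daemon in the foreground (the binary path below is the usual Debian/Ubuntu location) and to rule out the classic silent killers, a full disk or a kernel kill:

        sudo -u mysql /usr/sbin/mysqld   # prints the abort reason to the console
        df -h /var /tmp                  # a full /var stops mysqld from starting
        dmesg | tail                     # look for OOM-killer or disk errors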

    Read the article

  • CLR via C# 3rd Edition is out

    - by Abhijeet Patel
    Time for some book news update. CLR via C#, 3rd Edition seems to have been out for a little while now. The book was released in early Feb this year, and needless to say my copy is on it’s way. I can barely wait to dig in and chew on the goodies that one of the best technical authors and software professionals I respect has in store. The 2nd edition of the book was an absolute treat and this edition promises to be no less. Here is a brief description of what’s new and updated from the 2nd edition. Part I – CLR Basics Chapter 1-The CLR’s Execution Model Added about discussion about C#’s /optimize and /debug switches and how they relate to each other. Chapter 2-Building, Packaging, Deploying, and Administering Applications and Types Improved discussion about Win32 manifest information and version resource information. Chapter 3-Shared Assemblies and Strongly Named Assemblies Added discussion of TypeForwardedToAttribute and TypeForwardedFromAttribute. Part II – Designing Types Chapter 4-Type Fundamentals No new topics. Chapter 5-Primitive, Reference, and Value Types Enhanced discussion of checked and unchecked code and added discussion of new BigInteger type. Also added discussion of C# 4.0’s dynamic primitive type. Chapter 6-Type and Member Basics No new topics. Chapter 7-Constants and Fields No new topics. Chapter 8-Methods Added discussion of extension methods and partial methods. Chapter 9-Parameters Added discussion of optional/named parameters and implicitly-typed local variables. Chapter 10-Properties Added discussion of automatically-implemented properties, properties and the Visual Studio debugger, object and collection initializers, anonymous types, the System.Tuple type and the ExpandoObject type. Chapter 11-Events Added discussion of events and thread-safety as well as showing a cool extension method to simplify the raising of an event. Chapter 12-Generics Added discussion of delegate and interface generic type argument variance. Chapter 13-Interfaces No new topics. Part III – Essential Types Chapter 14-Chars, Strings, and Working with Text No new topics. Chapter 15-Enums Added coverage of new Enum and Type methods to access enumerated type instances. Chapter 16-Arrays Added new section on initializing array elements. Chapter 17-Delegates Added discussion of using generic delegates to avoid defining new delegate types. Also added discussion of lambda expressions. Chapter 18-Attributes No new topics. Chapter 19-Nullable Value Types Added discussion on performance. Part IV – CLR Facilities Chapter 20-Exception Handling and State Management This chapter has been completely rewritten. It is now about exception handling and state management. It includes discussions of code contracts and constrained execution regions (CERs). It also includes a new section on trade-offs between writing productive code and reliable code. Chapter 21-Automatic Memory Management Added discussion of C#’s fixed state and how it works to pin objects in the heap. Rewrote the code for weak delegates so you can use them with any class that exposes an event (the class doesn’t have to support weak delegates itself). Added discussion on the new ConditionalWeakTable class, GC Collection modes, Full GC notifications, garbage collection modes and latency modes. I also include a new sample showing how your application can receive notifications whenever Generation 0 or 2 collections occur. 
Chapter 22-CLR Hosting and AppDomains Added discussion of side-by-side support allowing multiple CLRs to be loaded in a single process. Added section on the performance of using MarshalByRefObject-derived types. Substantially rewrote the section on cross-AppDomain communication. Added section on AppDomain monitoring and first-chance exception notifications. Updated the section on the AppDomainManager class. Chapter 23-Assembly Loading and Reflection Added section on how to deploy a single file with dependent assemblies embedded inside it. Added section comparing reflection invoke vs bind/invoke vs bind/create delegate/invoke vs C#'s dynamic type. Chapter 24-Runtime Serialization This is a whole new chapter that was not in the 2nd Edition. Part V – Threading Chapter 25-Threading Basics Whole new chapter motivating why Windows supports threads, thread overhead, CPU trends, NUMA architectures, the relationship between CLR threads and Windows threads, the Thread class, reasons to use threads, thread scheduling and priorities, foreground threads vs background threads. Chapter 26-Performing Compute-Bound Asynchronous Operations Whole new chapter explaining the CLR's thread pool. This chapter covers all the new .NET 4.0 constructs including cooperative cancellation, Tasks, the Parallel class, parallel language integrated query, timers, how the thread pool manages its threads, cache lines and false sharing. Chapter 27-Performing I/O-Bound Asynchronous Operations Whole new chapter explaining how Windows performs synchronous and asynchronous I/O operations. Then, I go into the CLR's Asynchronous Programming Model, my AsyncEnumerator class, the APM and exceptions, applications and their threading models, implementing a service asynchronously, the APM and compute-bound operations, APM considerations, I/O request priorities, converting the APM to a Task, the event-based Asynchronous Pattern, programming model soup. Chapter 28-Primitive Thread Synchronization Constructs Whole new chapter discussing class libraries and thread safety, primitive user-mode and kernel-mode constructs, and data alignment. Chapter 29-Hybrid Thread Synchronization Constructs Whole new chapter discussing various hybrid constructs such as ManualResetEventSlim, SemaphoreSlim, CountdownEvent, Barrier, ReaderWriterLock(Slim), OneManyResourceLock, Monitor, 3 ways to solve the double-check locking technique, .NET 4.0's Lazy and LazyInitializer classes, the condition variable pattern, .NET 4.0's concurrent collection classes, the ReaderWriterGate and SyncGate classes.

    Read the article

  • Part 8: How to name EBS Customizations

    - by volker.eckardt(at)oracle.com
    You might wonder why I am discussing this here. The reason is simple: nearly every project has slightly different naming conventions, which always makes life a bit complicated (for developers, but also for those responsible for setup, and for consultants). Although we always create a document to describe the technical object naming conventions, I have rarely seen a dedicated document with functional naming conventions. To be precise: from my standpoint, there should always be one global naming definition for an implementation! Let me discuss some related questions: What is the best convention for the customization reference? How to name database objects (tables, packages etc.)? How to name functional objects like Value Sets, Concurrent Programs, etc.? How best to separate customizations from standard objects? What is the best convention for the customization reference? The customization reference is the key you use to reference your customization from other lists, from the project plan etc. Usually it is something like XXHU_CONV_22 (HU=customer abbreviation, CONV=Conversion object #22) or XXFA_DEPRN_RPT_02 (FA=Fixed Assets, DEPRN=Short object group, here depreciation, RPT=Report, 02=2nd report in this area). As this is just a reference (not an object name yet), I would prefer the second option: XX=Customization, FA=Main EBS module linked (you may sometimes have more, but FA is the main one), DEPRN_RPT=Short name to specify the customization, 02=a unique number. Important here is that the HU isn't used, because XX is enough to mark a custom object, and the 3rd+4th chars can be used for the EBS module short name. How to name database objects (tables, packages etc.)? I have led different developer teams, and I know that one common way is to take the customization reference and add more chars behind it to classify the object (like _V for a view and _T1 for triggers etc.). The only concern I have with this approach is reusability. If you name your view XXFA_DEPRN_RPT_02_V, no one will reuse this nice view by choice, as it seems to be specific to this CEMLI. My suggestion is rather to name the view XXFA_DEPRN_PERIODS_V and thereby allow reusability for other CEMLIs (although the view will be deployed primarily with CEMLI package XXFA_DEPRN_RPT_02). (Check also one of the following blogs, where I will talk about deployment.) How to name Value Sets, Concurrent Programs, etc.? For Value Sets I would go with the same convention as for database objects, starting with XX<Module> …. For Concurrent Programs the situation is a bit different. This "object" is seen and used by a lot of users, and they will search for it. In many projects it is common to start again with the company short name, or with XX. My proposal would differ. If you have created your own report and you name it "XX: Invoice Report", the user has to remember that this report does not start with "I", it starts with X. Would you like typing an X if you are looking for an invoice report? No, you wouldn't! So my advice would be to name it "Invoice Report (XXAP)". We still know it is custom (because of the XXAP), but the end user will type the key "i" to get it (and will see similar reports also starting with "i"). I hope that the general scheme behind this has now become obvious. How to separate customizations from standard objects best? I would not have this section here if naming did not play an important role. Unfortunately, we cannot always link a custom application to our own object; therefore the naming is really important.
    In the file system structure we use our $XXyy_TOP; in JAVA_TOP it is perhaps also "xx" in front. But in the database itself? Although there are different concepts in place, many implementations still use the standard "apps" approach, meaning custom objects are stored in the apps schema (which should not cause any trouble). Final advice: review the naming conventions regularly, once a month. You may have to add more! And publish them! To summarize: technical and functional customized objects should always follow a naming convention. This naming convention should be project-wide, and only one place shall be used to maintain it (like a Wiki). If the name is for the end user, rather put a customization identifier at the end; if it is an internal name, start with XX…

    Read the article

  • reading the file name from user input in MIPS assembly

    - by Hassan Al-Jeshi
    I'm writing a MIPS assembly code that will ask the user for the file name and it will produce some statistics about the content of the file. However, when I hard code the file name into a variable from the beginning it works just fine, but when I ask the user to input the file name it does not work. after some debugging, I have discovered that the program adds 0x00 char and 0x0a char (check asciitable.com) at the end of user input in the memory and that's why it does not open the file based on the user input. anyone has any idea about how to get rid of those extra chars, or how to open the file after getting its name from the user?? here is my complete code (it is working fine except for the file name from user thing, and anybody is free to use it for any purpose he/she wants to): .data fin: .ascii "" # filename for input msg0: .asciiz "aaaa" msg1: .asciiz "Please enter the input file name:" msg2: .asciiz "Number of Uppercase Char: " msg3: .asciiz "Number of Lowercase Char: " msg4: .asciiz "Number of Decimal Char: " msg5: .asciiz "Number of Words: " nline: .asciiz "\n" buffer: .asciiz "" .text #----------------------- li $v0, 4 la $a0, msg1 syscall li $v0, 8 la $a0, fin li $a1, 21 syscall jal fileRead #read from file move $s1, $v0 #$t0 = total number of bytes li $t0, 0 # Loop counter li $t1, 0 # Uppercase counter li $t2, 0 # Lowercase counter li $t3, 0 # Decimal counter li $t4, 0 # Words counter loop: bge $t0, $s1, end #if end of file reached OR if there is an error in the file lb $t5, buffer($t0) #load next byte from file jal checkUpper #check for upper case jal checkLower #check for lower case jal checkDecimal #check for decimal jal checkWord #check for words addi $t0, $t0, 1 #increment loop counter j loop end: jal output jal fileClose li $v0, 10 syscall fileRead: # Open file for reading li $v0, 13 # system call for open file la $a0, fin # input file name li $a1, 0 # flag for reading li $a2, 0 # mode is ignored syscall # open a file move $s0, $v0 # save the file descriptor # reading from file just opened li $v0, 14 # system call for reading from file move $a0, $s0 # file descriptor la $a1, buffer # address of buffer from which to read li $a2, 100000 # hardcoded buffer length syscall # read from file jr $ra output: li $v0, 4 la $a0, msg2 syscall li $v0, 1 move $a0, $t1 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg3 syscall li $v0, 1 move $a0, $t2 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg4 syscall li $v0, 1 move $a0, $t3 syscall li $v0, 4 la $a0, nline syscall li $v0, 4 la $a0, msg5 syscall addi $t4, $t4, 1 li $v0, 1 move $a0, $t4 syscall jr $ra checkUpper: blt $t5, 0x41, L1 #branch if less than 'A' bgt $t5, 0x5a, L1 #branch if greater than 'Z' addi $t1, $t1, 1 #increment Uppercase counter L1: jr $ra checkLower: blt $t5, 0x61, L2 #branch if less than 'a' bgt $t5, 0x7a, L2 #branch if greater than 'z' addi $t2, $t2, 1 #increment Lowercase counter L2: jr $ra checkDecimal: blt $t5, 0x30, L3 #branch if less than '0' bgt $t5, 0x39, L3 #branch if greater than '9' addi $t3, $t3, 1 #increment Decimal counter L3: jr $ra checkWord: bne $t5, 0x20, L4 #branch if 'space' addi $t4, $t4, 1 #increment words counter L4: jr $ra fileClose: # Close the file li $v0, 16 # system call for close file move $a0, $s0 # file descriptor to close syscall # close file jr $ra Note: I'm using MARS Simulator, if that makes any different
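    The usual fix is to strip the trailing newline that syscall 8 stores in the buffer before using it as a filename; note too that fin is declared with .ascii "" (zero bytes reserved), so it needs real storage such as .space 32 (size here is an assumption). A sketch of the trim loop, to run right after the read-string syscall (register choice is arbitrary):

        # fin: .space 32           # reserve actual room for the filename
                la   $t0, fin      # walk the buffer
        trim:   lb   $t1, 0($t0)   # load next byte
                beq  $t1, 10, fix  # 10 = 0x0a = '\n': overwrite it
                beqz $t1, done     # hit the null terminator: nothing to trim
                addi $t0, $t0, 1
                j    trim
        fix:    sb   $zero, 0($t0) # replace '\n' with '\0'
        done: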

    Read the article

  • How to automatically resize an EditText widget (with some attributes) in a TableLayout

    - by steff
    Hi everyone, I have a layout issue. What I do is this: create TableLayout in xml with zero children: <TableLayout android:id="@+id/t_layout_contents" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@id/l_layout_tags" android:stretchColumns="1" android:paddingLeft="5dip" android:paddingRight="5dip" /> Insert first row programmatically in onCreate(): tLayoutContents = (TableLayout)findViewById(R.id.t_layout_contents); NoteElement nr_1 = new NoteElement(this); tLayoutContents.addView(nr_1); Class "NoteElement" extends TableRow. The 1st row just consists of a blank ImageView as a placeholder and an EditText to enter text. NoteElement's constructor looks like this: public NoteElement(Context c) { super(c); this.context = c; defaultText = c.getResources().getString(R.string.create_note_help_text); imageView = new ImageView(context); imageView.setImageResource(android.R.color.transparent); LayoutParams params = new LayoutParams(0); imageView.setLayoutParams(params); addView(imageView); addView(addTextField()); } Method addTextField() specifies the attributes for the EditText widget: private EditText addTextField() { editText = new EditText(context); editText.setImeOptions(EditorInfo.IME_ACTION_DONE); editText.setMinLines(4); editText.setRawInputType(InputType.TYPE_TEXT_FLAG_MULTI_LINE); editText.setHint(R.string.create_note_et_blank_text); editText.setAutoLinkMask(Linkify.ALL); editText.setPadding(5, 0, 0, 0); editText.setGravity(Gravity.TOP); editText.setVerticalScrollBarEnabled(true); LayoutParams params = new LayoutParams(1); editText.setLayoutParams(params); return editText; } So far, so good. But my problem occurs as soon as the available space for the chars is depleted. The EditText does not resize itself but switches to a single line EditText. I am desperatly looking for a way in which the EditText resizes itself in its height dynamically, being dependant on the inserted text length. Does anyone have a hint on this? Thanks & regards, steff
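    One thing to look at: setRawInputType() changes only the content-type integer, so the widget's single-line state is never switched off; setting the full input type (text class OR'd with the multi-line flag) usually restores dynamic height. A sketch of the relevant lines in addTextField():

        // class + flag together, instead of setRawInputType(...FLAG_MULTI_LINE)
        editText.setInputType(InputType.TYPE_CLASS_TEXT
                | InputType.TYPE_TEXT_FLAG_MULTI_LINE);
        editText.setSingleLine(false);            // allow the field to grow
        editText.setHorizontallyScrolling(false); // wrap long lines instead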

    Read the article

  • Stored Procedure call with parameters in ASP.NET MVC

    - by cc0
    I have a working controller for another stored procedure in the database, but I am trying to test another. When I request the URL; http://host.com/Map?minLat=0&maxLat=50&minLng=0&maxLng=50 I get the following error message, which is understandable but I can't seem to find out why it occurs; Procedure or function 'esp_GetPlacesWithinGeoSpan' expects parameter '@MinLat', which was not supplied. This is the code I am using. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Mvc.Ajax; using System.Data; using System.Text; using System.Data.SqlClient; namespace prototype.Controllers { public class MapController : Controller { //Initial variable definitions //Array with chars to be used with the Trim() methods char[] lastComma = { ',' }; //Minimum and maximum lat/longs for queries float _minLat; float _maxLat; float _minLng; float _maxLng; //Creates stringbuilder object to store SQL results StringBuilder json = new StringBuilder(); //Defines which SQL-server to connect to, which database, and which user SqlConnection con = new SqlConnection(...connection string here...); // // HTTP-GET: /Map/ public string CallProcedure_getPlaces(float minLat, float maxLat, float minLng, float maxLng) { con.Open(); using (SqlCommand cmd = new SqlCommand("esp_GetPlacesWithinGeoSpan", con)) { cmd.CommandType = CommandType.Text; cmd.Parameters.AddWithValue("@MinLat", _minLat); cmd.Parameters.AddWithValue("@MaxLat", _maxLat); cmd.Parameters.AddWithValue("@MinLng", _minLng); cmd.Parameters.AddWithValue("@MaxLng", _maxLng); using (SqlDataReader reader = cmd.ExecuteReader()) { while (reader.Read()) { json.AppendFormat("\"{0}\":{{\"c\":{1},\"f\":{2}}},", reader["PlaceID"], reader["PlaceName"], reader["SquareID"]); } } con.Close(); } return "{" + json.ToString().TrimEnd(lastComma) + "}"; } //http://host.com/Map?minLat=0&maxLat=50&minLng=0&maxLng=50 public ActionResult Index(float minLat, float maxLat, float minLng, float maxLng) { _minLat = minLat; _maxLat = maxLat; _minLng = minLng; _maxLng = maxLng; return Content(CallProcedure_getPlaces(_minLat, _maxLat, _minLng, _maxLng)); } } } Any help on resolving this problem would be greatly appreciated.
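    The error message is consistent with the command being run as plain text: the batch "esp_GetPlacesWithinGeoSpan" calls the procedure with no arguments, so the parameters added with AddWithValue are never bound to it. The likely one-line fix:

        // Execute the name as a stored procedure so @MinLat etc. are bound
        cmd.CommandType = CommandType.StoredProcedure;

    Note also that Index() copies its arguments into _minLat and friends before CallProcedure_getPlaces runs, so the parameter values themselves should be fine.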

    Read the article

  • Sentiment analysis with NLTK python for sentences using sample data or webservice?

    - by Ke
    I am embarking upon a NLP project for sentiment analysis. I have successfully installed NLTK for python (seems like a great piece of software for this). However,I am having trouble understanding how it can be used to accomplish my task. Here is my task: I start with one long piece of data (lets say several hundred tweets on the subject of the UK election from their webservice) I would like to break this up into sentences (or info no longer than 100 or so chars) (I guess i can just do this in python??) Then to search through all the sentences for specific instances within that sentence e.g. "David Cameron" Then I would like to check for positive/negative sentiment in each sentence and count them accordingly NB: I am not really worried too much about accuracy because my data sets are large and also not worried too much about sarcasm. Here are the troubles I am having: All the data sets I can find e.g. the corpus movie review data that comes with NLTK arent in webservice format. It looks like this has had some processing done already. As far as I can see the processing (by stanford) was done with WEKA. Is it not possible for NLTK to do all this on its own? Here all the data sets have already been organised into positive/negative already e.g. polarity dataset http://www.cs.cornell.edu/People/pabo/movie-review-data/ How is this done? (to organise the sentences by sentiment, is it definitely WEKA? or something else?) I am not sure I understand why WEKA and NLTK would be used together. Seems like they do much the same thing. If im processing the data with WEKA first to find sentiment why would I need NLTK? Is it possible to explain why this might be necessary? I have found a few scripts that get somewhat near this task, but all are using the same pre-processed data. Is it not possible to process this data myself to find sentiment in sentences rather than using the data samples given in the link? Any help is much appreciated and will save me much hair! Cheers Ke
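    For point 4, NLTK alone can do the training; a minimal sketch using the movie-reviews corpus it ships with (word-presence features, Python 2 era), which you could then apply to tweet-sized strings:

        import nltk
        from nltk.corpus import movie_reviews

        def features(words):
            # bag-of-words presence features
            return dict((w, True) for w in words)

        # label each review document with its pos/neg category
        data = [(features(movie_reviews.words(fid)), cat)
                for cat in movie_reviews.categories()
                for fid in movie_reviews.fileids(cat)]

        classifier = nltk.NaiveBayesClassifier.train(data)
        print classifier.classify(features("david cameron was brilliant".split()))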

    Read the article

  • Lots of questions about file I/O (reading/writing message strings)

    - by Nazgulled
    Hi, For this university project I'm doing (for which I've made a couple of posts in the past), which is some sort of social network, it's required the ability for the users to exchange messages. At first, I designed my data structures to hold ALL messages in a linked list, limiting the message size to 256 chars. However, I think my instructors will prefer if I save the messages on disk and read them only when I need them. Of course, they won't say what they prefer, I need to make a choice and justify the best I can why I went that route. One thing to keep in mind is that I only need to save the latest 20 messages from each user, no more. Right now I have an Hash Table that will act as inbox, this will be inside the user profile. This Hash Table will be indexed by name (the user that sent the message). The value for each element will be a data structure holding an array of size_t with 20 elements (20 messages like I said above). The idea is to keep track of the disk file offsets and bytes written. Then, when I need to read a message, I just need to use fseek() and read the necessary bytes. I think this could work nicely... I could use just one single file to hold all messages from all users in the network. I'm saying one single file because a colleague asked an instructor about saving the messages from each user independently which he replied that it might not be the best approach cause the file system has it's limits. That's why I'm thinking of going the single file route. However, this presents a problem... Since I only need to save the latest 20 messages, I need to discard the older ones when I reach this limit. I have no idea how to do this... All I know is about fread() and fwrite() to read/write bytes from/to files. How can I go to a file offset and say "hey, delete the following X bytes"? Even if I could do that, there's another problem... All offsets below that one will be completely different and I would have to process all users mailboxes to fix the problem. Which would be a pain... So, any suggestions to solve my problems? What do you suggest?
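    With fixed-size records you never have to "delete X bytes": once a mailbox holds 20 messages you seek back to the oldest slot and overwrite it in place, so no other offsets move. A sketch, assuming 256-byte messages and a per-user base offset in the single shared file:

        #include <stdio.h>

        #define MSG_SIZE 256
        #define MAX_MSGS 20

        /* total = how many messages this user has ever received */
        void save_message(FILE *f, long base, unsigned total, const char msg[MSG_SIZE])
        {
            long slot = total % MAX_MSGS;            /* oldest slot wraps around */
            fseek(f, base + slot * MSG_SIZE, SEEK_SET);
            fwrite(msg, 1, MSG_SIZE, f);
        }

        void load_message(FILE *f, long base, unsigned slot, char out[MSG_SIZE])
        {
            fseek(f, base + (long)slot * MSG_SIZE, SEEK_SET);
            fread(out, 1, MSG_SIZE, f);
        }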

    Read the article

  • Using Android SAXParser, one of my XML elements is mysteriously breaking in half.

    - by Drennen
    And it's not '&'. I'm using the SAXParser object to parse the actual XML. This is normally done by passing a URL to the XMLReader.parse method. Because my XML is coming from a POST request to a web service, I am saving the result as a String and then employing StringReader / InputSource to feed this string back to the XMLReader.parse method. However, something strange is happening at the 2001st character of the XML string. The 'characters' method of the document handler is being called TWICE between the startElement and endElement methods, effectively breaking my string (in this case a project title) into two pieces. Because I am instantiating objects in my characters method, I am getting two objects instead of one. This line, about 2000 chars into the string, fires 'characters' two times, breaking between "Lower" and "Level":

        <title>SUMC-BOOKSTORE, LOWER LEVEL RENOVATIONS</title>

    When I bypass the StringReader / InputSource workaround and feed a flat XML file to XMLReader.parse, it works absolutely fine. Something about StringReader and/or InputSource is somehow screwing this up. Here is my method that takes an XML string and parses it through the SAXParser:

        public void parseXML(String XMLstring) {
            try {
                SAXParserFactory spf = SAXParserFactory.newInstance();
                SAXParser sp = spf.newSAXParser();
                XMLReader xr = sp.getXMLReader();
                xr.setContentHandler(this);
                // Something is happening in the StringReader or InputSource
                // that cuts the XML element in half at the 2001st character.
                StringReader sr = new StringReader(XMLstring);
                InputSource is = new InputSource(sr);
                xr.parse(is);
            } catch (IOException e) {
                Log.e("CMS1", e.toString());
            } catch (SAXException e) {
                Log.e("CMS2", e.toString());
            } catch (ParserConfigurationException e) {
                Log.e("CMS3", e.toString());
            }
        }

    I would greatly appreciate any ideas on how to not have 'characters' firing off twice when I get to this point in the XML string. Or show me how to use a POST request and still pass the URL to the parse function. THANK YOU.
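    For reference, the SAX contract allows exactly this: a parser may deliver one text node in several characters() chunks (2048 bytes is a common internal buffer size, which would match a break near the 2001st character). The usual pattern is to accumulate the chunks and build objects in endElement(), a sketch:

        private final StringBuilder text = new StringBuilder();

        @Override
        public void characters(char[] ch, int start, int length) {
            text.append(ch, start, length);   // may arrive in pieces
        }

        @Override
        public void endElement(String uri, String localName, String qName) {
            String value = text.toString();   // the complete element text
            // instantiate the object here instead of in characters()
            text.setLength(0);                // reset for the next element
        }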

    Read the article

  • Do you still limit line length in code?

    - by Noldorin
    This is a matter on which I would like to gauge the opinion of the community: do you still limit the length of lines of code to a fixed maximum? This was certainly a convention of the past for many languages; one would typically cap the number of characters per line at a value such as 80 (and more recently 100 or 120, I believe). As far as I understand, the primary reasons for limiting line length are: Readability - you don't have to scroll horizontally when you want to see the end of some lines. Printing - admittedly (at least in my experience), most code that you are working on does not get printed out on paper, but by limiting the number of characters you can ensure that formatting doesn't get messed up when printed. Past editors (?) - not sure about this one, but I suspect that at some point in the distant past of programming, (at least some) text editors may have been based on a fixed-width buffer. I'm sure there are points that I am still missing, so feel free to add to these... Now, when I observe C or C# code nowadays, I often see a number of different styles, the main ones being: Line length capped at 80, 100, or even 120 characters. As far as I understand, 80 is the traditional length, but the longer ones of 100 and 120 have appeared because of the widespread use of high resolutions and widescreen monitors nowadays. No line length capping at all. This tends to be pretty horrible to read, and I don't see it too often, though it's certainly not too rare either. Inconsistent capping of line length. The lengths of some lines are limited to a fixed maximum (or even a maximum that changes depending on the file/location in code), while others (possibly comments) are not at all. My personal preference here (at least recently) has been to cap the line length at 100 in the Visual Studio editor. This means that in a decently sized window (on a non-widescreen monitor), the ends of lines are still fully visible. I can however see a few disadvantages in this, especially when you end up writing code that's indented 3 or 4 levels and then having to include a long string literal - though I often take this as a sign to refactor my code! In particular, I am curious what the C and C# coders (or anyone who uses Visual Studio for that matter) think about this point, though I would be interested in hearing anyone's thoughts on the subject. Edit: Thanks for all the answers - I appreciate the variety of opinions here, all presenting sound reasons. Consensus does seem to be tipping in the direction of always (or almost always) limiting the line length. Interestingly, it seems to be in various coding standards to limit the line length. Judging by some of the answers, both the Python and Google C++ guidelines set the limit at 80 chars. I haven't seen anything similar regarding C# or VB.NET, but I would be curious to see if such guidelines exist anywhere.

    Read the article

  • Should I deal with paths longer than MAX_PATH?

    - by John
    Just had an interesting case. My software reported back a failure caused by a path being longer than MAX_PATH. The path was just a plain old document in My Documents, e.g.: C:\Documents and Settings\Bill\Some Stupid FOlder Name\A really ridiculously long file thats really very very very..........very long.pdf Total length 269 characters (MAX_PATH==260). The user wasn't using an external hard drive or anything like that. This was a file on a Windows-managed drive. So my question is this: should I care? I'm not saying can I deal with the long paths, I'm asking should I. Yes, I'm aware of the "\\?\" Unicode hack on some Win32 APIs, but it seems this hack is not without risk (as it changes the way the APIs parse paths) and also isn't supported by all APIs. So anyway, let me just state my position/assertions: First, presumably the only way the user was able to break this limit is if the app she used uses the special Unicode hack. It's a PDF file, so maybe the PDF tool she used uses this hack. I tried to reproduce this (by using the Unicode hack) and experimented. What I found was that although the file appears in Explorer, I can do nothing with it. I can't open it, I can't choose "Properties" (Windows 7). Other common apps can't open the file (e.g. IE, Firefox, Notepad). Explorer will also not let me create files/dirs which are too long - it just refuses. Ditto for the command-line tool cmd.exe. So basically, one could look at it this way: a rogue tool has allowed the user to create a file which is not accessible to a lot of Windows (e.g. Explorer). I could take the view that I shouldn't have to deal with this. (As an aside, this isn't a vote of approval for a short max path length: I think 260 chars is a joke, I'm just saying that if the Windows shell and some APIs can't handle 260 then why should I?). So, is this a fair view? Should I say "Not my problem"? Thanks! John
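    If the decision is "not my problem", it is still worth failing gracefully up front rather than mid-operation; a small sketch (the logger call is hypothetical):

        const int MaxPath = 260;
        if (path.Length >= MaxPath)
        {
            log.Warn("Skipping path longer than MAX_PATH: " + path);
            return;
        }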

    Read the article

  • How can I make Swig correctly wrap a char* buffer that is modified in C as a Java Something-or-other

    - by Ukko
    I am trying to wrap some legacy code for use in Java and I was quite happy to see that Swig was able to handle the header file and it generate a great wrapper that almost works. Now I am looking for the deep magic that will make it really work. In C I have a function that looks like this DLL_IMPORT int DustyVoodoo(char *buff, int len, char *curse); This integer returned by this function is an error code in case it fails. The arguments are buff is a character buffer len is the length of the data in the buffer curse the another character buffer that contains the result of calling DustyVoodoo So, you can see where this is going, the result is actually coming back via the third argument. Also len is confusing since it may be the length of both buffers, they are always allocated as being the same size in calling code but given what DustyVoodoo does I don't think that they need be the same. To be safe both buffers should be the same size in practice, say 512 chars. The C code generated for the binding is as follows: SWIGEXPORT jint JNICALL Java_pemapiJNI_DustyVoodoo(JNIEnv *jenv, jclass jcls, jstring jarg1, jint jarg2, jstring jarg3) { jint jresult = 0 ; char *arg1 = (char *) 0 ; int arg2 ; char *arg3 = (char *) 0 ; int result; (void)jenv; (void)jcls; arg1 = 0; if (jarg1) { arg1 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg1, 0); if (!arg1) return 0; } arg2 = (int)jarg2; arg3 = 0; if (jarg3) { arg3 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg3, 0); if (!arg3) return 0; } result = (int)PemnEncrypt(arg1,arg2,arg3); jresult = (jint)result; if (arg1) (*jenv)->ReleaseStringUTFChars(jenv, jarg1, (const char *)arg1); if (arg3) (*jenv)->ReleaseStringUTFChars(jenv, jarg3, (const char *)arg3); return jresult; } It is correct for what it does; however, it misses the fact that cursed is not just an input, it is altered by the function and should be returned as an output. It also does not know that the java Strings are really buffers and should be backed by a suitably sized array. I think that Swig can do the right thing here, I just can't figure out from the documentation how to tell Swig what it needs to know. Any typemap masers in the house?
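    One workable route, sketched below and untested: instead of teaching SWIG about the out-parameter, wrap the call in a small helper that returns the result string, and let SWIG's %newobject free the malloc'd buffer. The 512-byte size and the NULL-on-failure convention are assumptions:

        %{
        #include <stdlib.h>
        %}

        %newobject dusty_voodoo_result;
        %inline %{
        char *dusty_voodoo_result(char *buff, int len) {
            char *curse = (char *)malloc(512);
            if (DustyVoodoo(buff, len, curse) != 0) { /* non-zero = error code */
                free(curse);
                return NULL;
            }
            return curse;   /* reaches Java as a String; freed via %newobject */
        }
        %}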

    Read the article

  • C# Reading and Writing a Char[] to and from a Byte[]

    - by Simon G
    Hi, I have a byte array of around 10,000 bytes which is basically a blob from delphi that contains char, string, double and arrays of various types. This need to be read in and updated via C#. I've created a very basic reader that gets the byte array from the db and converts the bytes to the relevant object type when accessing the property which works fine. My problem is when I try to write to a specific char[] item, it doesn't seem to update the byte array. I've created the following extensions for reading and writing: public static class CharExtension { public static byte ToByte( this char c ) { return Convert.ToByte( c ); } public static byte ToByte( this char c, int position, byte[] blob ) { byte b = c.ToByte(); blob[position] = b; return b; } } public static class CharArrayExtension { public static byte[] ToByteArray( this char[] c ) { byte[] b = new byte[c.Length]; for ( int i = 1; i < c.Length; i++ ) { b[i] = c[i].ToByte(); } return b; } public static byte[] ToByteArray( this char[] c, int positon, int length, byte[] blob ) { byte[] b = c.ToByteArray(); Array.Copy( b, 0, blob, positon, length ); return b; } } public static class ByteExtension { public static char ToChar( this byte[] b, int position ) { return Convert.ToChar( b[position] ); } } public static class ByteArrayExtension { public static char[] ToCharArray( this byte[] b, int position, int length ) { char[] c = new char[length]; for ( int i = 0; i < length; i++ ) { c[i] = b.ToChar( position ); position += 1; } return c; } } to read and write chars and char arrays my code looks like: Byte[] _Blob; // set from a db field public char ubin { get { return _tariffBlob.ToChar( 14 ); } set { value.ToByte( 14, _Blob ); } } public char[] usercaplas { get { return _tariffBlob.ToCharArray( 2035, 10 ); } set { value.ToByteArray( 2035, 10, _Blob ); } } So to write to the objects I can do: ubin = 'C'; // this will update the byte[] usercaplas = new char[10] { 'A', 'B', etc. }; // this will update the byte[] usercaplas[3] = 'C'; // this does not update the byte[] I know the reason is that the setter property is not being called but I want to know is there a way around this using code similar to what I already have? I know a possible solution is to use a private variable called _usercaplas that I set and update as needed however as the byte array is nearly 10,000 bytes in length the class is already long and I would like a simpler approach as to reduce the overall code length and complexity. Thank
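    The element write goes through the getter: usercaplas[3] first calls get, which builds a fresh char[] from the blob, and the 'C' lands in that temporary array, never in _Blob. A small sketch that writes through, reusing the extensions already defined above (note, by the way, that CharArrayExtension.ToByteArray starts its loop at i = 1, which skips the first element):

        // element-level write-through for the usercaplas region (2035..2044)
        public void SetUsercaplas(int index, char c)
        {
            if (index < 0 || index >= 10)
                throw new ArgumentOutOfRangeException("index");
            c.ToByte(2035 + index, _Blob);  // reuses CharExtension.ToByte
        }

    A fuller alternative is a small wrapper class holding the blob, offset and length, exposing a this[int] indexer whose setter does the same thing.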

    Read the article

  • Make an array of two.

    - by marharépa
    Hello! I'd like to make an array which tells my site's pages where to show in PHP. In $sor["site_id"] i've got two or four chars-lenght strings. example: 23, 42, 13, 1 In my other array (called to $pages_show) i want to give all the site_ids to an other id. $parancs="SELECT * FROM pages ORDER BY id"; $eredmeny=mysql_query($parancs) or die("Hibás SQL:".$parancs); while($sor=mysql_fetch_array($eredmeny)) { $pages[]=array( "id"=>$sor["id"], "name"=>$sor["name"], "title"=>$sor["title"], "description"=>$sor["description"], "keywords"=>$sor["keywords"] ); // this makes my pages array with the information about that page. $shower = explode(", ",$sor["site_id"]); // this is explode my site_id $pages_show[]=array( "id"=>$sor["id"], "where"=>$shower //to 'where' i want to put all the explode's elements one-by-one, to get the result like down ); This script gives me the following result: Array (3) 0 => Array (2) id => "29" where => Array (2) 0 => "17" 1 => "16" 1 => Array (2) id => "30" where => Array (1) 0 => "17" 2 => Array (2) id => "31" where => Array (1) 0 => "17" But in this case I'd like to be this: Array (4) 0 => Array (2) id => "29" where => "17" 1 => Array (2) id => "29" where => "16" 2 => Array (2) id => "30" where => "17" 3 => Array (2) id => "31" where => "17" Thanks for your help.
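    A sketch of the loop body that produces the flat result: push one row per exploded element instead of storing the whole array under "where":

        $shower = explode(", ", $sor["site_id"]);
        foreach ($shower as $site) {
            $pages_show[] = array(
                "id"    => $sor["id"],
                "where" => $site   // one row per (id, site_id) pair
            );
        }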

    Read the article

  • Is there a standard way to encode a .NET string into a JavaScript string for use in MS AJAX?

    - by Rich Andrews
    I'm trying to pass the output of a SQL Server exception to the client using the RegisterStartupScript method of the MS ScriptManager. This works fine for some errors, but when the exception contains single quotes the alert fails. I don't want to escape only single quotes, though - is there a standard function I can call to escape any special chars for use in JavaScript?

        string scriptstring = "alert('" + ex.Message + "');";
        ScriptManager.RegisterStartupScript(this, this.GetType(), "Alert", scriptstring, true);

    Thanks tpeczek, the code almost worked for me :) but with a slight amendment (the escaping of single quotes) it works a treat. I've included my amended version here...

        public class JSEncode
        {
            /// <summary>
            /// Encodes a string to be represented as a string literal. The format
            /// is essentially a JSON string.
            ///
            /// The string returned includes outer quotes.
            /// Example output: "Hello \"Rick\"!\r\nRock on"
            /// </summary>
            /// <param name="s"></param>
            /// <returns></returns>
            public static string EncodeJsString(string s)
            {
                StringBuilder sb = new StringBuilder();
                sb.Append("\"");
                foreach (char c in s)
                {
                    switch (c)
                    {
                        case '\'': sb.Append("\\\'"); break;
                        case '\"': sb.Append("\\\""); break;
                        case '\\': sb.Append("\\\\"); break;
                        case '\b': sb.Append("\\b"); break;
                        case '\f': sb.Append("\\f"); break;
                        case '\n': sb.Append("\\n"); break;
                        case '\r': sb.Append("\\r"); break;
                        case '\t': sb.Append("\\t"); break;
                        default:
                            int i = (int)c;
                            if (i < 32 || i > 127)
                            {
                                sb.AppendFormat("\\u{0:X04}", i);
                            }
                            else
                            {
                                sb.Append(c);
                            }
                            break;
                    }
                }
                sb.Append("\"");
                return sb.ToString();
            }
        }

    As mentioned below - original source: here
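    For reference, a quick usage sketch against the question's original snippet (variable names as in the question): EncodeJsString already returns the surrounding quotes, so none are added around the call.

        string scriptstring = "alert(" + JSEncode.EncodeJsString(ex.Message) + ");";
        ScriptManager.RegisterStartupScript(this, this.GetType(), "Alert", scriptstring, true);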

    Read the article

  • conflicting declaration when filling a static std::map class member variable

    - by Max
    I have a class with a static std::map member variable that maps chars to a custom type Terrain. I'm attempting to fill this map in the class's implementation file, but I get several errors. Here's my header file:

        #ifndef LEVEL_HPP
        #define LEVEL_HPP

        #include <bitset>
        #include <list>
        #include <map>
        #include <string>
        #include <vector>
        #include "libtcod.hpp"

        namespace yarl
        {
            namespace level
            {
                class Terrain
                {
                // Member Variables
                private:
                    std::bitset<5> flags;

                // Member Functions
                public:
                    explicit Terrain(const std::string& flg) : flags(flg) {}
                    (...)
                };

                class Level
                {
                private:
                    static std::map<char, Terrain> terrainTypes;
                    (...)
                };
            }
        }

        #endif

    and here's my implementation file:

        #include <bitset>
        #include <list>
        #include <map>
        #include <string>
        #include <vector>
        #include "Level.hpp"
        #include "libtcod.hpp"

        using namespace std;

        namespace yarl
        {
            namespace level
            {
                /* fill Level::terrainTypes */
                map<char,Terrain> Level::terrainTypes['.'] = Terrain("00001");  // clear
                map<char,Terrain> Level::terrainTypes[','] = Terrain("00001");  // clear
                map<char,Terrain> Level::terrainTypes['\''] = Terrain("00001"); // clear
                map<char,Terrain> Level::terrainTypes['`'] = Terrain("00001");  // clear
                map<char,Terrain> Level::terrainTypes[178] = Terrain("11111");  // wall
                (...)
            }
        }

    I'm using g++, and the errors I get are:

        src/Level.cpp:15: error: conflicting declaration ‘std::map<char, yarl::level::Terrain, std::less<char>, std::allocator<std::pair<const char, yarl::level::Terrain> > > yarl::level::Level::terrainTypes [46]’
        src/Level.hpp:104: error: ‘yarl::level::Level::terrainTypes’ has a previous declaration as ‘std::map<char, yarl::level::Terrain, std::less<char>, std::allocator<std::pair<const char, yarl::level::Terrain> > > yarl::level::Level::terrainTypes’
        src/Level.cpp:15: error: declaration of ‘std::map<char, yarl::level::Terrain, std::less<char>, std::allocator<std::pair<const char, yarl::level::Terrain> > > yarl::level::Level::terrainTypes’ outside of class is not definition
        src/Level.cpp:15: error: conversion from ‘yarl::level::Terrain’ to non-scalar type ‘std::map<char, yarl::level::Terrain, std::less<char>, std::allocator<std::pair<const char, yarl::level::Terrain> > >’ requested
        src/Level.cpp:15: error: ‘yarl::level::Level::terrainTypes’ cannot be initialized by a non-constant expression when being declared

    I get a set of these for each map assignment line in the implementation file. Anyone see what I'm doing wrong? Thanks for your help.
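    For reference, one common pre-C++11 fix, sketched under the assumption that the header stays as shown: a static data member needs exactly one definition at namespace scope, so the per-element assignments move into a helper function (makeTerrainTypes is a hypothetical name introduced here).

        #include <map>
        #include <utility>
        #include "Level.hpp"

        namespace yarl
        {
            namespace level
            {
                namespace
                {
                    std::map<char, Terrain> makeTerrainTypes()
                    {
                        std::map<char, Terrain> m;
                        m.insert(std::make_pair('.',  Terrain("00001"))); // clear
                        m.insert(std::make_pair(',',  Terrain("00001"))); // clear
                        m.insert(std::make_pair('\'', Terrain("00001"))); // clear
                        m.insert(std::make_pair('`',  Terrain("00001"))); // clear
                        m.insert(std::make_pair(static_cast<char>(178), Terrain("11111"))); // wall
                        return m;
                    }
                }

                // the one and only definition of the static member
                std::map<char, Terrain> Level::terrainTypes = makeTerrainTypes();
            }
        }

    With C++11 a braced initializer also works: std::map<char, Terrain> Level::terrainTypes = { {'.', Terrain("00001")}, /* ... */ };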

    Read the article

  • How does the rsync algorithm correctly identify repeating blocks?

    - by Kai
    I'm on a personal quest to learn how the rsync algorithm works. After some reading and thinking, I've come up with a situation where I think the algorithm fails, and I'm trying to figure out how this is resolved in an actual implementation. Consider this example, where A is the receiver and B is the sender:

        A = abcde1234512345fghij
        B = abcde12345fghij

    As you can see, the only change is that one 12345 has been removed. Now, to make this example interesting, let's choose a block size of 5 bytes (chars). Hashing the values on the sender's side using the weak checksum gives the following values list:

        abcde|12345|fghij

        abcde -> 495
        12345 -> 255
        fghij -> 520

        values = [495, 255, 520]

    Next we check to see if any hash values differ in A. If there's a matching block, we can skip to the end of that block for the next check. If there's a non-matching block, then we've found a difference. I'll step through this process:

    1. Hash the first block. Does this hash exist in the values list? abcde -> 495 (yes, so skip)
    2. Hash the second block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
    3. Hash the third block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
    4. Hash the fourth block. Does this hash exist in the values list? fghij -> 520 (yes, so skip)
    5. No more data, so we're done.

    Since every hash was found in the values list, we conclude that A and B are the same. Which, in my humble opinion, isn't true. It seems to me this will happen whenever there is more than one block that shares the same hash. What am I missing?
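    For what it's worth, here is a toy sketch (a plain dict stands in for rsync's weak-plus-strong checksum table; none of this is rsync's actual code) of why duplicate block hashes are harmless. rsync hashes the receiver's blocks and lets the sender scan its own file, and the output of that scan is a delta of block references and literals that reconstructs the sender's file byte for byte - not a yes/no "files match" verdict. Two old blocks sharing a hash just means either one can stand in for the other, since their bytes are identical.

        BLOCK = 5

        def make_delta(old, new):
            """List of ('ref', block_index) / ('lit', bytes) instructions turning old into new."""
            table = {}
            for i in range(0, len(old) - BLOCK + 1, BLOCK):
                table.setdefault(old[i:i + BLOCK], i // BLOCK)  # duplicate blocks keep the first index
            delta, pos, lit = [], 0, b""
            while pos < len(new):
                block = new[pos:pos + BLOCK]
                if len(block) == BLOCK and block in table:
                    if lit:
                        delta.append(("lit", lit))
                        lit = b""
                    delta.append(("ref", table[block]))  # a matched block becomes a cheap reference
                    pos += BLOCK
                else:
                    lit += new[pos:pos + 1]              # unmatched bytes travel as literals
                    pos += 1
            if lit:
                delta.append(("lit", lit))
            return delta

        def apply_delta(old, delta):
            out = b""
            for kind, val in delta:
                out += old[val * BLOCK:(val + 1) * BLOCK] if kind == "ref" else val
            return out

        old = b"abcde1234512345fghij"  # A, the receiver's copy
        new = b"abcde12345fghij"       # B, the sender's copy
        delta = make_delta(old, new)   # [('ref', 0), ('ref', 1), ('ref', 3)]
        assert apply_delta(old, delta) == new

    The scan stops at the end of the sender's file, so the second 12345 is simply never referenced, and the receiver rebuilds exactly abcde12345fghij.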

    Read the article

  • When is ¦ not equal to ¦?

    - by Trey Jackson
    Background: I'm working with netlists, and in general people specify different hierarchies by using /. However, it's not illegal to actually use a / as part of an instance name. For example, X1/X2/X3/X4 might refer to instance X4 inside another instance named X1/X2/X3, or it might refer to an instance named X3/X4 inside an instance named X2 inside an instance named X1. Got it? There's really no "regular" character that cannot be used as part of an instance name, so you resort to a non-printable one, or perhaps one outside of the standard 0..127 ASCII chars. I thought I'd try (decimal) 166, because for me it shows up as the pipe: ¦.

    So... I've got some C++ code which constructs the path name using ¦ as the hierarchical separator, so the path above looks like X1¦X2/X3¦X4. Now the GUI is written in Tcl/Tk, and to properly translate this into human-readable terms I need to do something like the following:

        set path [getPathFromC++]   ;# returns X1¦X2/X3¦X4
        set humanreadable [join [split $path ¦] /]

    Basically, replace the ¦ with /. (I could also accomplish this with [string map].) Now, the problem is that the ¦ in the string I get from C++ doesn't match the ¦ I can create in Tcl, i.e. this fails:

        set path [getPathFromC++]   ;# returns X1¦X2/X3¦X4
        string match $path [format X1%cX2/X3%cX4 166 166]

    Visually, the two strings look identical, but string match fails. I even tried using scan to see if I'd mixed up the bit values. But

        set path  [getPathFromC++]  ;# returns X1¦X2/X3¦X4
        set path2 [format X1%cX2/X3%cX4 166 166]
        for {set i 0} {$i < [string length $path]} {incr i} {
            set p  [string range $path  $i $i]
            set p2 [string range $path2 $i $i]
            scan $p  %c c
            scan $p2 %c c2
            puts [list $p $c :::: $p2 $c2 equal? [string equal $c $c2]]
        }

    produces output which looks like everything should match, except [string equal] fails for the ¦ characters with a printed line of:

        ¦ 166 :::: ¦ 166 equal? 0

    For what it's worth, the character in C++ is defined as:

        const char SEPARATOR = 166;

    Any ideas why a character outside the regular ASCII range would fail like this? When I changed the separator to (decimal) 28 (^\), things worked fine. I just don't want to get bitten by a similar problem on a different platform. (I'm currently using Red Hat Linux.)
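    One plausible explanation, hedged because it depends on how the string crosses the C++/Tcl boundary: byte 166 is ¦ (BROKEN BAR, U+00A6) in Latin-1, but a raw byte handed to Tcl without an encoding conversion need not compare equal to the ¦ Tcl builds internally as Unicode. A minimal sketch of the conversion, assuming the C++ side delivers raw Latin-1 bytes through the question's getPathFromC++ command:

        # convert the raw bytes from C++ into Tcl's internal Unicode first
        set path [encoding convertfrom iso8859-1 [getPathFromC++]]

        # now both sides agree the separator is U+00A6
        set humanreadable [join [split $path \u00a6] /]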

    Read the article

  • Can't find mistake -- 'segmentation fault' - in C

    - by Mosh
    Hello all! I wrote this function but can't spot the problem that gives me the 'segmentation fault' message. Thanks for any help, guys!!

        /* This function extracts all header files in a *.c1 file */
        void includes_extractor(FILE *c1_fp, char *c1_file_name, int c1_file_str_len)
        {
            int i = 0;
            FILE *c2_fp, *header_fp;
            char ch, *c2_file_name, header_name[80]; /* we can assume line length 80 chars MAX */
            char inc_name[] = "include";
            char inc_chk[INCLUDE_LEN+1]; /* INCLUDE_LEN is defined | +1 for null */

            /* making the c2 file name */
            c2_file_name = (char *) malloc((c1_file_str_len) * sizeof(char));
            if (c2_file_name == NULL)
            {
                printf("Out of memory !\n");
                exit(0);
            }
            strcpy(c2_file_name, c1_file_name);
            c2_file_name[c1_file_str_len-1] = '\0';
            c2_file_name[c1_file_str_len-2] = '2';

            /* Open source & destination files + ERR check */
            if (!(c1_fp = fopen(c1_file_name, "r")))
            {
                fprintf(stderr, "\ncannot open *.c1 file !\n");
                exit(0);
            }
            if (!(c2_fp = fopen(c2_file_name, "w+")))
            {
                fprintf(stderr, "\ncannot open *.c2 file !\n");
                exit(0);
            }

            /* the next lines copy char by char from c1 to c2, but on meeting a header file, copy its content */
            ch = fgetc(c1_fp);
            while (!feof(c1_fp))
            {
                i = 0; /* zero i */
                if (ch == '#') /* potential #include case */
                {
                    fgets(inc_chk, INCLUDE_LEN+1, c1_fp); /* 8 places for "include" + null */
                    if (strcmp(inc_chk, inc_name) == 0) /* case #include */
                    {
                        ch = fgetc(c1_fp);
                        while (ch == ' ') /* stop when head with a '<' or '"' */
                        {
                            ch = fgetc(c1_fp);
                        } /* while(2) */
                        ch = fgetc(c1_fp); /* start reading the header file name */
                        while ((ch != '"') || (ch != '>')) /* until we get the end of the header name */
                        {
                            header_name[i] = ch;
                            i++;
                            ch = fgetc(c1_fp);
                        } /* while(3) */
                        header_name[i] = '\0'; /* close the header_name array */
                        if (!(header_fp = fopen(header_name, "r"))) /* open *.h for read + ERR chk */
                        {
                            fprintf(stderr, "cannot open header file !\n");
                            exit(0);
                        }
                        while (!feof(header_fp)) /* copy header file content to *.c2 file */
                        {
                            ch = fgetc(header_fp);
                            fputc(ch, c2_fp);
                        } /* while(4) */
                        fclose(header_fp);
                    }
                } /* first if */
                else
                {
                    fputc(ch, c2_fp);
                }
                ch = fgetc(c1_fp);
            } /* while(1) */

            fclose(c1_fp);
            fclose(c2_fp);
            free(c2_file_name);
        }
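    A hedged guess at the main culprit, sketched as a standalone replacement for the header-name loop rather than a tested patch: the condition (ch != '"') || (ch != '>') is true for every character, because no character can equal both at once, so while(3) never terminates and writes past the end of header_name[80] - a classic stack smash. Declaring ch as int also keeps EOF (-1) detectable, and bounding i keeps every write inside the array.

        #include <stdio.h>

        /* Read up to cap-1 chars of a header name, stopping at '"', '>' or EOF. */
        static int read_header_name(FILE *fp, char name[], int cap)
        {
            int ch = fgetc(fp);  /* int, not char, so EOF stays distinguishable */
            int i = 0;
            while (ch != EOF && ch != '"' && ch != '>' && i < cap - 1)
            {
                name[i++] = (char)ch;
                ch = fgetc(fp);
            }
            name[i] = '\0';
            return i;
        }

    The copy loop while (!feof(header_fp)) also writes one extra char (the converted EOF result) to the output file, though that alone would not cause the segmentation fault.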

    Read the article
