Search Results

Search found 12083 results on 484 pages for 'general purpose'.


  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as:

        "Display.c", line 65: Function has no return statement : x_io_error_handler
        "hostx.c", line 341: Function has no return statement : x_io_error_handler

    These came from functions defined to match a specific callback signature that declared an int return type, but which called exit() instead of returning, and so had no return statement. They had been generating warnings for years, which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these warnings became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out.

    Investigation showed this was due to the Solaris headers, which during Solaris 10 development gained a number of annotations when gcc was being used for the amd64 kernel bringup, before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc.

    To resolve this, I fixed both sides of the problem, so that new X.Org sources would build on older Solaris releases or with older Studio compilers, and so that the general problem wouldn't break more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to declare that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out, as that introduced unreachable-code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. On the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable, for Studio 12.0 and later compilers, the annotations already present in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so they may change in the future.

    Actually getting this integrated into Solaris took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, showed a large number of differences in the resulting binaries, due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations. After comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean.

    Previously, if you had a function declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution.
    For instance:

        #include <stdlib.h>

        int callback(int status) {
            if (status == 0)
                return status;
            exit(status);
        }

    previously required a never-executed "return 0;" after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler & lint will both issue "statement not reached" warnings for a "return 0;" after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function.

    If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:

        #pragma does_not_return(exit)
        #pragma does_not_return(panic)

    Hopefully this little extra work is paid for by the compilers & code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors & warning messages.


  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly.

    Configuring the size of the pool in the data source is somewhere between an art and a science; this article will try to move it closer to science.

    From the beginning, a WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity, for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically, the unused connections were all released) was changed to shrink by half of the unused connections (see the sketch below).

    The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

    When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that, setting the login delay seconds on the pool, but that also increases the boot time.

    There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot, and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

    There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use.
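
    To illustrate the shrink-policy change described above, here is a toy Python comparison of the pre-10.3.4 and 10.3.4+ behaviors. This is not WLS code, just the arithmetic as described in this article; the function names are made up.

        # Old policy: release ALL unused connections, shrinking down to
        # max(currently used, initial capacity).
        def shrink_target_pre_10_3_4(in_use, initial_capacity):
            return max(in_use, initial_capacity)

        # 10.3.4+ policy: release only HALF of the unused connections,
        # which reduces thrashing when the load fluctuates.
        def shrink_target_10_3_4(pool_size, in_use):
            return pool_size - (pool_size - in_use) // 2

        # Example: 50 connections in the pool, 10 in use, initial capacity 5.
        print(shrink_target_pre_10_3_4(10, 5))   # 10 -- all the slack is dropped
        print(shrink_target_10_3_4(50, 10))      # 30 -- only half the slack is dropped
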
    When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

    Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason, that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of an unused existing connection and get a new one with the correct state when needed.

    So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. The application should be written to reserve and close connections only as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users.

    If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used. In general, you want the size to be big enough that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision, since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended.

    In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.
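
    As a concrete illustration of setting the three capacities, here is a minimal WLST (Jython) sketch against an existing generic data source. The admin URL, credentials, and data source name are placeholders, and the MBean path assumes the usual JDBCSystemResource layout (it varies by domain); MinCapacity requires WLS 10.3.6 or later.

        # wlst pool_sizing.py -- a sketch, not production code
        connect('weblogic', 'welcome1', 't3://localhost:7001')
        edit()
        startEdit()
        # Path convention: /JDBCSystemResources/<ds>/JDBCResource/<ds>/JDBCConnectionPoolParams/<ds>
        cd('/JDBCSystemResources/myDS/JDBCResource/myDS/JDBCConnectionPoolParams/myDS')
        cmo.setInitialCapacity(0)   # defer connection creation out of the boot
        cmo.setMinCapacity(5)       # lower limit for shrinking
        cmo.setMaxCapacity(50)      # sized for peak concurrent demand plus headroom
        save()
        activate()
        disconnect()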


  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength, I'm trying to create an algorithm that will provide a rough estimation of how much entropy a password has. I'm trying to create an algorithm that's as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following:

    - password length
    - repeated characters
    - patterns (logical)
    - different character spaces (LC, UC, Numeric, Special, Extended)
    - dictionary attacks

    It does NOT cover the following, and SHOULD cover it WELL (though not perfectly):

    - ordering (passwords can be strictly ordered by output of this algorithm)
    - patterns (spatial)

    Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue.

    The algorithm:

        // the password to test
        password = ?
        length = length(password)

        // unique character counts from password (duplicates discarded)
        uqlca = number of unique lowercase alphabetic characters in password
        uquca = number of unique uppercase alphabetic characters
        uqd   = number of unique digits
        uqsp  = number of unique special characters (anything with a key on the keyboard)
        uqxc  = number of unique special special characters (alt codes, extended-ascii stuff)

        // algorithm parameters, total sizes of alphabet spaces
        Nlca = total possible number of lowercase letters (26)
        Nuca = total uppercase letters (26)
        Nd   = total digits (10)
        Nsp  = total special characters (32 or something)
        Nxc  = total extended ascii characters that don't fit into other categories (idk, 50?)

        // algorithm parameters, pw strength growth rates as percentages (per character)
        flca = entropy growth factor for lowercase letters (.25 is probably a good value)
        fuca = EGF for uppercase letters (.4 is probably good)
        fd   = EGF for digits (.4 is probably good)
        fsp  = EGF for special chars (.5 is probably good)
        fxc  = EGF for extended ascii chars (.75 is probably good)

        // repetition factors. few unique letters == low factor, many unique == high
        rflca = (1 - (1 - flca) ^ uqlca)
        rfuca = (1 - (1 - fuca) ^ uquca)
        rfd   = (1 - (1 - fd  ) ^ uqd  )
        rfsp  = (1 - (1 - fsp ) ^ uqsp )
        rfxc  = (1 - (1 - fxc ) ^ uqxc )

        // digit strengths
        strength = (rflca * Nlca + rfuca * Nuca + rfd * Nd + rfsp * Nsp + rfxc * Nxc) ^ length
        entropybits = log_base_2(strength)

    A few inputs and their desired and actual entropy_bits outputs:

        INPUT           DESIRED        ACTUAL
        aaa             very pathetic    8.1
        aaaaaaaaa       pathetic        24.7
        abcdefghi       weak            31.2
        H0ley$Mol3y_    strong          72.2
        s^fU¬5ü;y34G<   wtf             88.9
        [a^36]*         pathetic        97.2
        [a^20]A[a^15]*  strong         146.8
        xkcd1**         medium          79.3
        xkcd2**         wtf            160.5

        *  these 2 passwords use shortened notation, where [a^N] expands to N a's.
        ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple"

    The algorithm does realize (correctly) that increasing the alphabet size (even by one character) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which both consist of 36 characters, but the second of the two has its 21st character capitalized. However, it does not account for the fact that a password of 36 a's is not a good idea: it's easily broken with a weak password cracker (and anyone who watches you type it will see it), and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having greater complexity density (is this even a thing?). How can I improve this algorithm?
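
    For concreteness, here is a direct Python transcription of the pseudocode above, using the suggested parameter values. It computes length * log2(sum) rather than log2(sum ^ length), which is mathematically the same; the printed numbers come from this transcription, so they may not exactly match the ACTUAL column above, which was produced by the question's own implementation.

        import math
        import string

        # (alphabet, N = space size, f = entropy growth factor) per character class.
        # len(string.punctuation) happens to be exactly 32, matching Nsp.
        CLASSES = [
            (set(string.ascii_lowercase), 26, 0.25),   # Nlca, flca
            (set(string.ascii_uppercase), 26, 0.40),   # Nuca, fuca
            (set(string.digits),          10, 0.40),   # Nd,   fd
            (set(string.punctuation),     32, 0.50),   # Nsp,  fsp
        ]
        N_XC, F_XC = 50, 0.75   # Nxc, fxc: everything else (incl. space) lands here

        def entropy_bits(password):
            uniq = set(password)
            leftover = set(uniq)            # characters not claimed by any class
            total = 0.0
            for alphabet, n, f in CLASSES:
                uq = len(uniq & alphabet)   # unique characters in this class
                leftover -= alphabet
                total += (1 - (1 - f) ** uq) * n        # rf * N for this class
            total += (1 - (1 - F_XC) ** len(leftover)) * N_XC
            return len(password) * math.log(total, 2) if total > 1 else 0.0

        for pw in ['aaa', 'aaaaaaaaa', 'abcdefghi', 'H0ley$Mol3y_',
                   'Tr0ub4dor&3', 'correct horse battery staple']:
            print('%-30s %6.1f' % (pw, entropy_bits(pw)))   # 'aaa' prints 8.1
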
    Addendum 1

    Dictionary attacks and pattern-based attacks seem to be the big thing, so I'll take a stab at addressing those. I could perform a comprehensive search through the password for words from a word list and replace words with tokens unique to the words they represent. Word-tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw) and I'd factor the weight into the password as I would any of the other weights.

    This word search could be specially modified to match both lowercase and uppercase letters as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be: for each non-perfect character match, give the word a bonus bit.

    I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (take the difference between each character), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern.

    At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters it symbolically represents. I made up some sort of pattern notation, but it includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern. I'd do something better in actual code.

    Modified example:

        Password:          1234kitty$$$$$herpderp
        Tokenized:         1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p
        Words Filtered:    1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002
        Patterns Filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002

        Breakdown: 3 small, unique words and 2 patterns
        Entropy:   about 45 bits, as per modified algorithm

        Password:          correcthorsebatterystaple
        Tokenized:         c o r r e c t h o r s e b a t t e r y s t a p l e
        Words Filtered:    @W6783 @W7923 @W1535 @W2285

        Breakdown: 4 small, unique words and no patterns
        Entropy:   43 bits, as per modified algorithm

    The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like:

        entropy(b) * l * (o + 1)    // o will be either zero or one

    The modified algorithm would find flaws with, and reduce the strength of, each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.
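
    As a proof of concept for the tokenizing pass sketched above, here is a toy Python version doing run detection, a constant-difference ("derivative") test, and longest-prefix dictionary matching. The word list and token formats are stand-ins, not part of the question's actual algorithm.

        import re

        # Stand-in for a real word list of roughly Nw ~= 2^11 entries.
        WORDS = {'kitty', 'herp', 'derp', 'correct', 'horse', 'battery', 'staple'}

        def tokenize(password):
            tokens, i = [], 0
            while i < len(password):
                # Runs of one repeated character, e.g. '$$$$$' -> pattern token (o=0).
                m = re.match(r'(.)\1{2,}', password[i:])
                if m:
                    tokens.append("@P[l=%d,o=0,b='%s']" % (len(m.group(0)), m.group(1)))
                    i += len(m.group(0))
                    continue
                # Constant-difference runs, e.g. '1234' -> pattern token (o=1).
                j = i + 1
                while j < len(password) and ord(password[j]) - ord(password[j - 1]) == 1:
                    j += 1
                if j - i >= 3:
                    tokens.append("@P[l=%d,o=1,b='%s']" % (j - i, password[i]))
                    i = j
                    continue
                # Longest-prefix dictionary match -> word token.
                for k in range(len(password), i, -1):
                    if password[i:k].lower() in WORDS:
                        tokens.append('@W<%s>' % password[i:k].lower())
                        i = k
                        break
                else:
                    tokens.append(password[i])   # plain character, weighted as before
                    i += 1
            return tokens

        print(tokenize('1234kitty$$$$$herpderp'))
        # ["@P[l=4,o=1,b='1']", '@W<kitty>', "@P[l=5,o=0,b='$']", '@W<herp>', '@W<derp>']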


  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior. One of the key topics to understand is the difference between directly using database user and password values and mapping from a WLS user and password to the associated database values. The direct use of database credentials is relatively new to WLS and was added based on customer feedback. Some of the trade-offs are covered in this article.

    Credential Mapping vs. Database Credentials

    Each WLS data source has a credential map: a mechanism used to map a key, in this case a WLS user, to security credentials (user and password). By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and converted to a database user and password using the credential map associated with the data source. If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used. Because of this defaulting mechanism, you should be careful what permissions are granted to the default user. Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users).

    To create an entry in the credential map:

    1) First create a WLS user. In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.
    2) Second, create the mapping. In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New.

    See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information.

    The advantages of using credential mapping are that: 1) you don't hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password, and 2) it provides a layer of abstraction between WLS security and database settings, such that many WLS identities can be mapped to a smaller set of DB identities, thereby requiring only middle-tier configuration updates when WLS users are added or removed.

    You can cut down the number of users that have access to a data source to reduce the user maintenance overhead. For example, suppose that a servlet has one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call. Every WebLogic user can reap the specific DBMS access coded into the servlet, but none has to have general access to the data source. For instance, there may be a 'Sales' DBMS which needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access, and a servlet is built that hard-wires the specific data source access credentials in its connection request. It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source. This is the approach that many large applications take, and it is the reasoning behind the default mapping behavior in WLS.
    The disadvantages of using the credential map are that: 1) it is difficult to manage (create, update, delete) with a large number of users, though it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries, and 2) you can't share a credential map between data sources, so entries must be duplicated.

    Some applications prefer not to use the credential map. Instead, the credentials passed to getConnection(user, password) should be treated as database credentials and used to authenticate with the database for the connection, avoiding going through the credential map. This is enabled by setting "use-database-credentials" to true. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help.

    Use Database Credentials is not currently supported for Multi Data Source configurations. When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes:

    1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0).
    2. oracle-proxy-session (this interaction is first available in 10.3.6.0).
    3. set client identifier (this interaction is available by patch in 10.3.6.0). Note that in the data source schema, the set client identifier feature is poorly named "credential-mapping-enabled"; the documentation and the console refer to it as Set Client Identifier.

    To review the behavior of credential mapping and using database credentials:

    - If using the credential map, there needs to be a mapping for each WLS user to database user for those users that will have access to the database; otherwise the default user for the data source will be used. If you always specify a user/password when getting a connection, you only need credential map entries for those specific users.
    - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used. If you specify a user/password when getting a connection, that user will be used for the credentials. WLS users are not involved at all in the data source connection process.
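
    To make the lookup order concrete, here is a toy Python model of the behavior reviewed above. This is not a WebLogic API, just an illustration of why the default user's permissions matter: any WLS user without a mapping falls through to the data source's default credentials.

        # WLS user -> (db user, db password); entries created via console or WLST.
        CREDENTIAL_MAP = {
            'alice': ('sales_ro', 's3cret'),
        }
        # User/password from the data source definition (the defaulting mechanism).
        DATASOURCE_DEFAULT = ('app_default', 'changeit')

        def resolve_db_credentials(wls_user):
            """Credentials actually sent to the database for a validated WLS user."""
            return CREDENTIAL_MAP.get(wls_user, DATASOURCE_DEFAULT)

        print(resolve_db_credentials('alice'))    # ('sales_ro', 's3cret')
        print(resolve_db_credentials('mallory'))  # falls through to the default user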


  • Are there negative impacts of open source on a commercial environment?

    - by Lostsoul
    I know this is not a good fit for Stack Overflow, but I wasn't sure if it was a good fit for this site either, so let me know if it's not and I'll delete it.

    I love programming for fun, but my role in my company is not technical. I have always loved the hacker culture and have been trying to drive that openness within my company from day one. My company has a very broad range of products, and there are a few that are not strategic to us, so I wanted to open source them (so we can focus on what makes us unique and open source the products that every firm has). Our industry does not open source (we would be the first firm to try this), and the feedback I'm getting from my management team is that either 1) we'll destroy the industry, or 2) all competitive commercial firms will unite against us and we'll be wiped out either way. I disagreed on both points, because I think transparency will only grow our industry and our firm (think of McDonald's/KFC sharing their recipes openly: people may copy you, competitors may target you, but customers also may feel more comfortable buying your product; the value add, I believe, is in the delivery and experience, not in hoarding the recipe). It's a big battle in my firm right now between the IT people, who have seen the positive effects of sharing, and the business people, who think we'll be giving up everything (they prefer we sell the parts we want to open source, but in their defense this is standard when divesting something). Our industry is very secretive, and I don't want to put anyone (even my competitors' employees) out of a job, yet I don't want to protect inefficient people by not being open with everyone. Yet I've seen so many amazing technologies created in interesting ways just by giving people the freedom to take apart code and put it back together.

    I'm interested in hearing people's thoughts (they don't have to be about my specific situation; I'm looking for the general lessons). It's a very stressful decision (but one I feel I must make), because if we go the open source route then there will be no going back. So what are your thoughts? Does open sourcing apply generally, or is it only really applicable to software? Is it overall good for people in the industry and outside? I'm actually more interested in the negative effects (although positive ones are welcome as well).

    Update: Long story short, although code is involved, this is not so much about code as about the idea of open sourcing. We are a mid-sized quant hedge fund. We have some unique strategies, but we also have the standard long/short, arbitrage, global macro, etc. funds. We are keeping the unique funds we have, but the other stuff that everyone else has we are considering open sourcing (we have put years of work & millions of dollars into it; our funds are pretty popular and our performance is in the first or second quartile, so I suspect there will be interest, but I don't know to what extent). The goal is not to get a community to work for us or anything; the goal is to let anyone who wants to tinker with it do so and create anything they want (it will not be part of our product line, although I may unofficially allocate some of our staff's time to assist any community that grows). Although the code base is quite large, the value in this is the industry knowledge and approaches we have acquired (there are many books on artificial intelligence and quant trading, but they are often years behind what's really going on, as most firms forbid their staff from discussing what they are doing).
    We are also considering, after we move our clients out, letting the software keep running and publishing the resulting portfolios for free as well, so people can at least see the results (as long as we have available infrastructure). I think our main choices are: we can continue to fight for market share in products that are becoming commoditized, we can shut the funds/products down (and keep the code, but no one outside of our firm will ever learn from it), or we can open source it and let people do what they want. By open sourcing it, my idea is that the talent pool in the industry will grow, because right now most of our hires have the same background (CFA, MBA, similar school, same experience, etc., because we can't spend time training people, so the industry 'standardizes' most people and thus the firms themselves start to look and act similar), but this may allow us to identify talent that has never been in the industry before (if we use a GPL license, then as people learn from what we did, we can learn from what they do as well, and maybe apply it to other areas of our firm). I see a lot of benefits but not many negatives, while my peers at the company see the opposite.


  • MySQL Utility Users' Console Overview

    - by rudrap
    MySQL Utilities Users' Console (mysqluc): The MySQL Utilities Users' Console is designed to make using the utilities easier via a dedicated console. It helps us use the utilities without worrying about the Python and utility paths.

    Why do we need a special console?

    - It provides a unique shell environment with command completion, help for each utility, user-defined variables, and type completion for options.
    - You no longer have to type out the entire name of the utility.
    - You don't need to remember the name of a database utility you want to use.
    - You can define variables and reuse them in your utility commands.
    - It is possible to run a utility command along with mysqluc and then exit the mysqluc console.

    Console commands:

        mysqluc> help

        Command                 Description
        ----------------------  ---------------------------------------------------
        help utilities          Display list of all utilities supported.
        help <utility>          Display help for a specific utility.
        help or help commands   Show this list.
        exit or quit            Exit the console.
        set <variable>=<value>  Store a variable for recall in commands.
        show options            Display list of options specified by the user on launch.
        show variables          Display list of variables.
        <ENTER>                 Press ENTER to execute command.
        <ESCAPE>                Press ESCAPE to clear the command entry.
        <UP>                    Press UP to retrieve the previous command.
        <DOWN>                  Press DOWN to retrieve the next command in history.
        <TAB>                   Press TAB for type completion of utility, option, or variable names.
        <TAB><TAB>              Press TAB twice for a list of matching type completions (context sensitive).

    How do I use it?

    Pre-requisites:

    - Download the latest version of MySQL Workbench.
    - MySQL servers are running.
    - Your PYTHONPATH is set (e.g. export PYTHONPATH=/...../mysql-utilities/).

    Check the version of the mysqluc utility:

        /usr/bin/python mysqluc.py --version

    It should display something like this:

        MySQL Utilities mysqluc.py version 1.1.0 - MySQL Workbench Distribution 5.2.44
        Copyright (c) 2010, 2012 Oracle and/or its affiliates. All rights reserved.
        This program is free software; see the source for copying conditions. There is
        NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE,
        to the extent permitted by law.
    Use of TAB to get the current utilities:

        mysqluc> mysqldb<TAB><TAB>

        Utility        Description
        -------------  ------------------------------------------------------------
        mysqldbcopy    copy databases from one server to another
        mysqldbexport  export metadata and data from databases
        mysqldbimport  import metadata and data from files

        mysqluc> mysqldbcopy --source=$se<TAB>

        Variable  Value
        --------  ------------------------------------------------------------------
        server1   root@localhost:3306
        server2   root@localhost:3307

    You can see the variables starting with "se" and then decide which to use.

    Run a utility via the console:

        /usr/bin/python mysqluc.py -e "mysqldbcopy --source=root@localhost:3306 --destination=root@localhost:3307 dbname"

    Get help for utilities in the console:

        mysqluc> help utilities      (display help for a utility)
        mysqluc> help mysqldbcopy    (details about mysqldbcopy and its options)

    Set variables and use them in commands:

        mysqluc> set server1 = root@localhost:3306
        mysqluc> show variables

        Variable  Value
        --------  ------------------------------------------------------------------
        server1   root@localhost:3306
        server2   root@localhost:3307

        mysqluc> mysqldbcopy --source=$server1 --destination=$server2 dbname <ENTER>

    The mysqldbcopy utility output will be displayed.

    Display the list of options specified by the user on launch:

        mysqluc> show options

    Variables can also be set when launching mysqluc:

        mysqluc SERVER=root@host123 VAR_A=57 -e "show variables"

        Variable  Value
        --------  -----------------------------------------------------------------
        SERVER    root@host123
        VAR_A     57

    Finding option names for a utility:

        mysqluc> mysqlserverclone --n

        Option               Description
        -------------------  ---------------------------------------------------------
        --new-data=NEW_DATA  the full path to the location of the data directory for
                             the new instance
        --new-port=NEW_PORT  the new port for the new instance - default=3307
        --new-id=NEW_ID      the server_id for the new instance - default=2

    Limitations: User-defined variables have a lifetime of the console run time.
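
    Since mysqluc accepts variables and a -e command on its command line (as shown above), it can also be driven from another Python script. A hypothetical wrapper; the path to mysqluc.py and the connect strings are placeholders:

        import subprocess

        def run_utility(command, **variables):
            # Build: python mysqluc.py VAR=value ... -e "<command>"
            args = ['python', 'mysqluc.py']
            args += ['%s=%s' % (k, v) for k, v in sorted(variables.items())]
            args += ['-e', command]
            subprocess.check_call(args)

        run_utility('mysqldbcopy --source=$server1 --destination=$server2 dbname',
                    server1='root@localhost:3306',
                    server2='root@localhost:3307')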


  • option page form in my wordpress theme [migrated]

    - by Templategraphy
    Here is my options page code, containing a number of fields (logo, slider, and so on). After filling in the options page form I want two things: 1) after submitting, the saved information must be retained in the form fields, and 2) using get_option(), extract each input tag's value and show it on the front end (slider image, slider heading, slider description).

    OPTION PAGE CODE:

        <?php
        class MySettingsPage
        {
            private $options;

            public function __construct() {
                add_action( 'admin_menu', array( $this, 'bguru_register_options_page' ) );
                add_action( 'admin_init', array( $this, 'bguru_register_settings' ) );
            }

            public function bguru_register_options_page() {
                // This page will be under "Settings"
                add_theme_page('Business Guru Options', 'Theme Customizer', 'edit_theme_options', 'bguru-options', array( $this, 'bguru_options_page') );
            }

            public function bguru_options_page() {
                // Set class property
                $this->options = get_option( 'bguru_logo' );
                $this->options = get_option( 'bguru_vimeo' );
                $this->options = get_option( 'bguru_slide_one_image' );
                $this->options = get_option( 'bguru_slide_one_heading' );
                $this->options = get_option( 'bguru_slide_one_text' );
                $this->options = get_option( 'bguru_slogan_heading' );
                $this->options = get_option( 'bguru_slogan_description' );
                ?>
                <div class="wrap">
                    <?php screen_icon(); ?>
                    <h1>Business Guru Options</h1>
                    <form method="post" action="options.php">
                        <table class="form-table">
                            <?php
                            // This prints out all hidden setting fields
                            settings_fields( 'defaultbg' );
                            do_settings_sections( 'defaultbg' );
                            submit_button();
                            ?>
                        </table>
                    </form>
                </div>
                <?php
            }

            /**
             * Register and add settings
             */
            public function bguru_register_settings() {
                register_setting( 'defaultbg', 'bguru_logo', array( $this, 'sanitize' ) );
                register_setting( 'defaultbg', 'bguru_vimeo', array( $this, 'sanitize' ) );
                register_setting( 'defaultbg', 'bguru_slide_one_image', array( $this, 'sanitize' ) );
                register_setting( 'defaultbg', 'bguru_slide_one_heading', array( $this, 'sanitize' ) );
                register_setting( 'defaultbg', 'bguru_slide_one_text', array( $this, 'sanitize' ) );
                register_setting( 'defaultbg', 'bguru_slogan_heading', array( $this, 'sanitize' ) );
                register_setting( 'defaultbg', 'bguru_slogan_description', array( $this, 'sanitize' ) );

                add_settings_section(
                    'setting_section_id',                  // ID
                    '<h2>General</h2>',
                    array( $this, 'print_section_info' ),  // Callback
                    'defaultbg'                            // Page
                );

                add_settings_field(
                    'bguru_logo',                            // ID
                    '<label for="bguru_logo">Logo</label>',  // Title
                    array( $this, 'logo_callback' ),         // Callback
                    'defaultbg',                             // Page
                    'setting_section_id'                     // Section
                );

                add_settings_field(
                    'bguru_vimeo',                       // ID
                    'Vimeo',                             // Vimeo
                    array( $this, 'socialv_callback' ),  // Callback
                    'defaultbg',                         // Page
                    'setting_section_id'                 // Section
                );

                add_settings_field(
                    'bguru_slide_one_image',                // ID
                    'Slide 1 Image',                        // Slide 1 Image
                    array( $this, 'slider1img_callback' ),  // Callback
                    'defaultbg',                            // Page
                    'setting_section_id'                    // Section
                );

                add_settings_field(
                    'bguru_slide_one_heading',               // ID
                    'Slide 1 Heading',                       // Slide 1 Heading
                    array( $this, 'slider1head_callback' ),  // Callback
                    'defaultbg',                             // Page
                    'setting_section_id'                     // Section
                );

                add_settings_field(
                    'bguru_slide_one_text',                  // ID
                    'Slide 1 Description',                   // Slide 1 Description
                    array( $this, 'slider1text_callback' ),  // Callback
                    'defaultbg',                             // Page
                    'setting_section_id'                     // Section
                );

                add_settings_field(
                    'bguru_slogan_heading',                  // ID
                    'Slogan Heading',                        // Slogan Heading
                    array( $this, 'slogan_head_callback' ),  // Callback
                    'defaultbg',                             // Page
                    'setting_section_id'                     // Section
                );

                add_settings_field(
                    'bguru_slogan_description',               // ID
                    'Slogan Container',                       // Slogan Container
                    array( $this, 'slogan_descr_callback' ),  // Callback
                    'defaultbg',                              // Page
                    'setting_section_id'                      // Section
                );
            }

            public function sanitize( $input ) {
                $new_input = array();
                if( isset( $input['bguru_logo'] ) )
                    $new_input['bguru_logo'] = sanitize_text_field( $input['bguru_logo'] );
                if( isset( $input['bguru_vimeo'] ) )
                    $new_input['bguru_vimeo'] = sanitize_text_field( $input['bguru_vimeo'] );
                if( isset( $input['bguru_slide_one_image'] ) )
                    $new_input['bguru_slide_one_image'] = sanitize_text_field( $input['bguru_slide_one_image'] );
                if( isset( $input['bguru_slide_one_heading'] ) )
                    $new_input['bguru_slide_one_heading'] = sanitize_text_field( $input['bguru_slide_one_heading'] );
                if( isset( $input['bguru_slide_one_text'] ) )
                    $new_input['bguru_slide_one_text'] = sanitize_text_field( $input['bguru_slide_one_text'] );
                if( isset( $input['bguru_slogan_heading'] ) )
                    $new_input['bguru_slogan_heading'] = sanitize_text_field( $input['bguru_slogan_heading'] );
                if( isset( $input['bguru_slogan_description'] ) )
                    $new_input['bguru_slogan_description'] = sanitize_text_field( $input['bguru_slogan_description'] );
                return $new_input;
            }

            public function print_section_info() {
                print 'Enter your settings below:';
            }

            public function logo_callback() {
                printf(
                    '<input type="text" id="bguru_logo" size="50" name="bguru_logo" value="%s" />',
                    isset( $this->options['bguru_logo'] ) ? esc_attr( $this->options['bguru_logo'] ) : ''
                );
            }

            public function socialv_callback() {
                printf(
                    '<input type="text" id="bguru_vimeo" size="50" name="bguru_vimeo" value="%s" />',
                    isset( $this->options['bguru_vimeo'] ) ? esc_attr( $this->options['bguru_vimeo'] ) : ''
                );
            }

            public function slider1img_callback() {
                printf(
                    '<input type="text" id="bguru_slide_one_image" size="50" name="bguru_slide_one_image" value="%s" />',
                    isset( $this->options['bguru_slide_one_image'] ) ? esc_attr( $this->options['bguru_slide_one_image'] ) : ''
                );
            }

            public function slider1head_callback() {
                printf(
                    '<input type="text" id="bguru_slide_one_heading" size="50" name="bguru_slide_one_heading" value="%s" />',
                    isset( $this->options['bguru_slide_one_heading'] ) ? esc_attr( $this->options['bguru_slide_one_heading'] ) : ''
                );
            }

            public function slider1text_callback() {
                printf(
                    '<input type="text" id="bguru_slide_one_text" size="50" name="bguru_slide_one_text" value="%s" />',
                    isset( $this->options['bguru_slide_one_text'] ) ? esc_attr( $this->options['bguru_slide_one_text'] ) : ''
                );
            }

            public function slogan_head_callback() {
                printf(
                    '<input type="text" id="bguru_slogan_heading" size="50" name="bguru_slogan_heading" value="%s" />',
                    isset( $this->options['bguru_slogan_heading'] ) ? esc_attr( $this->options['bguru_slogan_heading'] ) : ''
                );
            }

            public function slogan_descr_callback() {
                printf(
                    '<input type="text" id="bguru_slogan_description" size="50" name="bguru_slogan_description" value="%s" />',
                    isset( $this->options['bguru_slogan_description'] ) ? esc_attr( $this->options['bguru_slogan_description'] ) : ''
                );
            }
        }

        if( is_admin() )
            $my_settings_page = new MySettingsPage();

    Here is my header.php code, where I display all the information from the options form:

        $bguru_logo_image = get_option('bguru_logo');
        if (!empty($bguru_logo_image)) {
            echo '<div id="logo"><a href="' . home_url() . '"><img src="' . $bguru_logo_image . '" width="218" alt="logo" /></a></div><!--/ #logo-->';
        } else {
            echo '<div id="logo"><a href="' . home_url() . '"><h1>' . get_bloginfo('name') . '</h1></a></div><!--/ #logo-->';
        }?>

        $bguru_social_vimeo = get_option('bguru_vimeo');
        if (!empty($bguru_social_vimeo)) {
            echo '<li class="vimeo"><a target="_blank" href="' . $bguru_social_vimeo . '">Vimeo</a></li>';
        }?>

    The same applies for the slider image, slider heading, and slider description. Please suggest some solutions.


  • web.xml not reloading in tomcat even after stop/start

    - by ajay
    This is in relation to http://stackoverflow.com/questions/2576514/basic-tomcat-servlet-error. I changed my web.xml file, ran "ant compile" and "ant all", then did /etc/init.d/tomcat stop and start. Even then, my web.xml file in the Tomcat deployment is still unchanged.

    This is my build.properties file:

        app.name=hello
        catalina.home=/usr/local/tomcat
        manager.username=admin
        manager.password=admin

    This is my build.xml file. Is there something wrong with it?

        <!--
          Licensed to the Apache Software Foundation (ASF) under one or more
          contributor license agreements. See the NOTICE file distributed with
          this work for additional information regarding copyright ownership.
          The ASF licenses this file to You under the Apache License, Version 2.0
          (the "License"); you may not use this file except in compliance with
          the License. You may obtain a copy of the License at

              http://www.apache.org/licenses/LICENSE-2.0

          Unless required by applicable law or agreed to in writing, software
          distributed under the License is distributed on an "AS IS" BASIS,
          WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
          See the License for the specific language governing permissions and
          limitations under the License.
        -->

        <!--
          General purpose build script for web applications and web services,
          including enhanced support for deploying directly to a Tomcat 6 based
          server.

          This build script assumes that the source code of your web application
          is organized into the following subdirectories underneath the source
          code directory from which you execute the build script:

              docs  Static documentation files to be copied to the "docs"
                    subdirectory of your distribution.
              src   Java source code (and associated resource files) to be
                    compiled to the "WEB-INF/classes" subdirectory of your web
                    application.
              web   Static HTML, JSP, and other content (such as image files),
                    including the WEB-INF subdirectory and its configuration file
                    contents.

          $Id: build.xml.txt 562814 2007-08-05 03:52:04Z markt $
        -->

        <!--
          A "project" describes a set of targets that may be requested when Ant
          is executed. The "default" attribute defines the target which is
          executed if no specific target is requested, and the "basedir"
          attribute defines the current working directory from which Ant executes
          the requested task. This is normally set to the current working
          directory.
        -->

        <project name="My Project" default="compile" basedir=".">

        <!-- ===================== Property Definitions =========================== -->

        <!--
          Each of the following properties is used in the build script. Values
          for these properties are set by the first place they are defined, from
          the following list:

          * Definitions on the "ant" command line (ant -Dfoo=bar compile).
          * Definitions from a "build.properties" file in the top level source
            directory of this application.
          * Definitions from a "build.properties" file in the developer's home
            directory.
          * Default definitions in this build.xml file.

          You will note below that property values can be composed based on the
          contents of previously defined properties. This is a powerful technique
          that helps you minimize the number of changes required when your
          development environment is modified. Note that property composition is
          allowed within "build.properties" files as well as in the "build.xml"
          script.
        -->

        <property file="build.properties"/>
        <property file="${user.home}/build.properties"/>

        <!-- ==================== File and Directory Names ======================== -->

        <!--
          These properties generally define file and directory names (or paths)
          that affect where the build process stores its outputs.

          app.name         Base name of this application, used to construct
                           filenames and directories. Defaults to "myapp".
          app.path         Context path to which this application should be
                           deployed (defaults to "/" plus the value of the
                           "app.name" property).
          app.version      Version number of this iteration of the application.
          build.home       The directory into which the "prepare" and "compile"
                           targets will generate their output. Defaults to "build".
          catalina.home    The directory in which you have installed a binary
                           distribution of Tomcat 6. This will be used by the
                           "deploy" target.
          dist.home        The name of the base directory in which distribution
                           files are created. Defaults to "dist".
          manager.password The login password of a user that is assigned the
                           "manager" role (so that he or she can execute commands
                           via the "/manager" web application)
          manager.url      The URL of the "/manager" web application on the
                           Tomcat installation to which we will deploy web
                           applications and web services.
          manager.username The login username of a user that is assigned the
                           "manager" role (so that he or she can execute commands
                           via the "/manager" web application)
        -->

        <property name="app.name"      value="myapp"/>
        <property name="app.path"      value="/${app.name}"/>
        <property name="app.version"   value="0.1-dev"/>
        <property name="build.home"    value="${basedir}/build"/>
        <property name="catalina.home" value="../../../.."/> <!-- UPDATE THIS! -->
        <property name="dist.home"     value="${basedir}/dist"/>
        <property name="docs.home"     value="${basedir}/docs"/>
        <property name="manager.url"   value="http://localhost:8080/manager"/>
        <property name="src.home"      value="${basedir}/src"/>
        <property name="web.home"      value="${basedir}/web"/>

        <!-- ==================== External Dependencies =========================== -->

        <!--
          Use property values to define the locations of external JAR files on
          which your application will depend. In general, these values will be
          used for two purposes:

          * Inclusion on the classpath that is passed to the Javac compiler
          * Being copied into the "/WEB-INF/lib" directory during execution of
            the "deploy" target.

          Because we will automatically include all of the Java classes that
          Tomcat 6 exposes to web applications, we will not need to explicitly
          list any of those dependencies. You only need to worry about external
          dependencies for JAR files that you are going to include inside your
          "/WEB-INF/lib" directory.
        -->

        <!-- Dummy external dependency -->
        <!-- <property name="foo.jar" value="/path/to/foo.jar"/> -->

        <!-- ==================== Compilation Classpath =========================== -->

        <!--
          Rather than relying on the CLASSPATH environment variable, Ant includes
          features that make it easy to dynamically construct the classpath you
          need for each compilation. The example below constructs the compile
          classpath to include the servlet.jar file, as well as the other
          components that Tomcat makes available to web applications
          automatically, plus anything that you explicitly added.
        -->

        <path id="compile.classpath">

            <!-- Include all JAR files that will be included in /WEB-INF/lib -->
            <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** -->
            <!-- <pathelement location="${foo.jar}"/> -->

            <!-- Include all elements that Tomcat exposes to applications -->
            <fileset dir="${catalina.home}/bin">
                <include name="*.jar"/>
            </fileset>
            <pathelement location="${catalina.home}/lib"/>
            <fileset dir="${catalina.home}/lib">
                <include name="*.jar"/>
            </fileset>

        </path>

        <!-- ================== Custom Ant Task Definitions ======================= -->

        <!--
          These properties define custom tasks for the Ant build tool that
          interact with the "/manager" web application installed with Tomcat 6.
          Before they can be successfully utilized, you must perform the
          following steps:

          - Copy the file "lib/catalina-ant.jar" from your Tomcat 6 installation
            into the "lib" directory of your Ant installation.
          - Create a "build.properties" file in your application's top-level
            source directory (or your user login home directory) that defines
            appropriate values for the "manager.password", "manager.url", and
            "manager.username" properties described above.

          For more information about the Manager web application, and the
          functionality of these tasks, see
          <http://localhost:8080/tomcat-docs/manager-howto.html>.
        -->

        <taskdef resource="org/apache/catalina/ant/catalina.tasks" classpathref="compile.classpath"/>

        <!-- ==================== Compilation Control Options ==================== -->

        <!--
          These properties control option settings on the Javac compiler when it
          is invoked using the <javac> task.

          compile.debug       Should compilation include the debug option?
          compile.deprecation Should compilation include the deprecation option?
          compile.optimize    Should compilation include the optimize option?
        -->

        <property name="compile.debug"       value="true"/>
        <property name="compile.deprecation" value="false"/>
        <property name="compile.optimize"    value="true"/>

        <!-- ==================== All Target ====================================== -->

        <!--
          The "all" target is a shortcut for running the "clean" target followed
          by the "compile" target, to force a complete recompile.
        -->

        <target name="all" depends="clean,compile" description="Clean build and dist directories, then compile"/>

        <!-- ==================== Clean Target ==================================== -->

        <!--
          The "clean" target deletes any previous "build" and "dist" directory,
          so that you can be ensured the application can be built from scratch.
        -->

        <target name="clean" description="Delete old build and dist directories">
            <delete dir="${build.home}"/>
            <delete dir="${dist.home}"/>
        </target>

        <!-- ==================== Compile Target ================================== -->

        <!--
          The "compile" target transforms source files (from your "src"
          directory) into object files in the appropriate location in the build
          directory. This example assumes that you will be including your classes
          in an unpacked directory hierarchy under "/WEB-INF/classes".
        -->

        <target name="compile" depends="prepare" description="Compile Java sources">

            <!-- Compile Java classes as necessary -->
            <mkdir dir="${build.home}/WEB-INF/classes"/>
            <javac srcdir="${src.home}" destdir="${build.home}/WEB-INF/classes" debug="${compile.debug}" deprecation="${compile.deprecation}" optimize="${compile.optimize}">
                <classpath refid="compile.classpath"/>
            </javac>

            <!-- Copy application resources -->
            <copy todir="${build.home}/WEB-INF/classes">
                <fileset dir="${src.home}" excludes="**/*.java"/>
            </copy>

        </target>

        <!-- ==================== Dist Target ===================================== -->

        <!--
          The "dist" target creates a binary distribution of your application in
          a directory structure ready to be archived in a tar.gz or zip file.
          Note that this target depends on two others:

          * "compile" so that the entire web application (including external
            dependencies) will have been assembled
          * "javadoc" so that the application Javadocs will have been created
        -->

        <target name="dist" depends="compile,javadoc" description="Create binary distribution">

            <!-- Copy documentation subdirectories -->
            <mkdir dir="${dist.home}/docs"/>
            <copy todir="${dist.home}/docs">
                <fileset dir="${docs.home}"/>
            </copy>

            <!-- Create application JAR file -->
            <jar jarfile="${dist.home}/${app.name}-${app.version}.war" basedir="${build.home}"/>

            <!-- Copy additional files to ${dist.home} as necessary -->

        </target>

        <!-- ==================== Install Target ================================== -->

        <!--
          The "install" target tells the specified Tomcat 6 installation to
          dynamically install this web application and make it available for
          execution. It does *not* cause the existence of this web application to
          be remembered across Tomcat restarts; if you restart the server, you
          will need to re-install this web application.

          If you have already installed this application, and simply want Tomcat
          to recognize that you have updated Java classes (or the web.xml file),
          use the "reload" target instead.

          NOTE: This target will only succeed if it is run from the same server
          that Tomcat is running on.

          NOTE: This is the logical opposite of the "remove" target.
        -->

        <target name="install" depends="compile" description="Install application to servlet container">
            <deploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}" localWar="file://${build.home}"/>
        </target>

        <!-- ==================== Javadoc Target ================================== -->

        <!--
          The "javadoc" target creates Javadoc API documentation for the Java
          classes included in your application. Normally, this is only required
          when preparing a distribution release, but is available as a separate
          target in case the developer wants to create Javadocs independently.
        -->

        <target name="javadoc" depends="compile" description="Create Javadoc API documentation">
            <mkdir dir="${dist.home}/docs/api"/>
            <javadoc sourcepath="${src.home}" destdir="${dist.home}/docs/api" packagenames="*">
                <classpath refid="compile.classpath"/>
            </javadoc>
        </target>

        <!-- ====================== List Target =================================== -->

        <!--
          The "list" target asks the specified Tomcat 6 installation to list the
          currently running web applications, either loaded at startup time or
          installed dynamically. It is useful to determine whether or not the
          application you are currently developing has been installed.
        -->

        <target name="list" description="List installed applications on servlet container">
            <list url="${manager.url}" username="${manager.username}" password="${manager.password}"/>
        </target>

        <!-- ==================== Prepare Target ================================== -->

        <!--
          The "prepare" target is used to create the "build" destination
          directory, and copy the static contents of your web application to it.
          If you need to copy static files from external dependencies, you can
          customize the contents of this task.

          Normally, this task is executed indirectly when needed.
        -->

        <target name="prepare">

            <!-- Create build directories as needed -->
            <mkdir dir="${build.home}"/>
            <mkdir dir="${build.home}/WEB-INF"/>
            <mkdir dir="${build.home}/WEB-INF/classes"/>

            <!-- Copy static content of this web application -->
            <copy todir="${build.home}">
                <fileset dir="${web.home}"/>
            </copy>

            <!-- Copy external dependencies as required -->
            <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** -->
            <mkdir dir="${build.home}/WEB-INF/lib"/>
            <!-- <copy todir="${build.home}/WEB-INF/lib" file="${foo.jar}"/> -->

            <!-- Copy static files from external dependencies as needed -->
            <!-- *** CUSTOMIZE HERE AS REQUIRED BY YOUR APPLICATION *** -->

        </target>

        <!-- ==================== Reload Target =================================== -->

        <!--
          The "reload" target signals the specified application on Tomcat 6 to
          shut itself down and reload. This can be useful when the web
          application context is not reloadable and you have updated classes or
          property files in the /WEB-INF/classes directory, or when you have
          added or updated jar files in the /WEB-INF/lib directory.

          NOTE: The /WEB-INF/web.xml web application configuration file is not
          reread on a reload. If you have made changes to your web.xml file you
          must stop then start the web application.
        -->

        <target name="reload" depends="compile" description="Reload application on servlet container">
            <reload url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/>
        </target>

        <!-- ==================== Remove Target =================================== -->

        <!--
          The "remove" target tells the specified Tomcat 6 installation to
          dynamically remove this web application from service.

          NOTE: This is the logical opposite of the "install" target.
        -->

        <target name="remove" description="Remove application on servlet container">
            <undeploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/>
        </target>

        </project>


  • Issue with my wordpress functions.php script

    - by iMayne
    Hello peeps. I'm having an issue when activating my theme in WordPress. I got this error message: "Parse error: syntax error, unexpected $end in C:\xampp\htdocs\xampp\wordpress\wp-content\themes\xit\functions.php on line 223". What's wrong? I totally don't understand. The functions.php script is:

        <?php
        if ( function_exists('register_sidebar') )
            register_sidebar(array(
                'before_widget' => '',
                'after_widget'  => '',
                'before_title'  => '<h2>',
                'after_title'   => '</h2>',
            ));

        function content($num) {
            $theContent = get_the_content();
            $output = preg_replace('/<img[^>]+./','', $theContent);
            $limit = $num+1;
            $content = explode(' ', $output, $limit);
            array_pop($content);
            $content = implode(" ",$content)."...";
            echo $content;
        }

        function post_is_in_descendant_category( $cats, $_post = null ) {
            foreach ( (array) $cats as $cat ) {
                // get_term_children() accepts integer ID only
                $descendants = get_term_children( (int) $cat, 'category');
                if ( $descendants && in_category( $descendants, $_post ) )
                    return true;
            }
            return false;
        }

        //custom comments
        function mytheme_comment($comment, $args, $depth) {
            $GLOBALS['comment'] = $comment; ?>
            <li <?php comment_class(); ?> id="li-comment-<?php comment_ID() ?>">
                <div id="comment-<?php comment_ID(); ?>">
                    <div class="comment-author vcard">
                        <div class="comment-meta commentmetadata">
                            <?php echo get_avatar($comment,$size='32',$default='http://www.gravatar.com/avatar/61a58ec1c1fba116f8424035089b7c71?s=32&d=&r=G' ); ?>
                            <?php printf(__('%1$s at %2$s'), get_comment_date(), get_comment_time()) ?>
                            <br /><?php printf(__('<strong>%s</strong> says:'), get_comment_author_link()) ?><?php edit_comment_link(__('(Edit)'),' ','') ?>
                        </div>
                    </div>
                    <?php if ($comment->comment_approved == '0') : ?>
                        <em><?php _e('Your comment is awaiting moderation.') ?></em> <br />
                    <?php endif; ?>
                    <div class="text"><?php comment_text() ?></div>
                    <div class="reply">
                        <?php comment_reply_link(array_merge( $args, array('depth' => $depth, 'max_depth' => $args['max_depth']))) ?>
                    </div>
                </div>
        <?php }

        add_action('admin_menu', 'xit_theme_page');

        function xit_theme_page () {
            if ( count($_POST) > 0 && isset($_POST['xit_settings']) ) {
                $options = array ( 'style','logo_img','logo_alt','logo_txt', 'logo_tagline', 'tagline_width',
                                   'contact_email','ads', 'advertise_page', 'twitter_link', 'facebook_link',
                                   'flickr', 'about_tit', 'about_txt', 'analytics');
                foreach ( $options as $opt ) {
                    delete_option ( 'xit_'.$opt, $_POST[$opt] );
                    add_option ( 'xit_'.$opt, $_POST[$opt] );
                }
            }
            add_theme_page(__('Xit Options'), __('Xit Options'), 'edit_themes', basename(__FILE__), 'xit_settings');
        }

        function xit_settings () {?>
            <div class="wrap">
            <h2>XIT Options Panel</h2>
            <form method="post" action="">
            <table class="form-table">
                <!-- General settings -->
                <tr>
                    <th colspan="2"><strong>General Settings</strong></th>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="style">Theme Color Scheme</label></th>
                    <td>
                        <select name="style" id="style">
                            <option value="pink.css" <?php if(get_option('xit_style') == 'pink.css'){?>selected="selected"<?php }?>>pink.css</option>
                            <option value="blue.css" <?php if(get_option('xit_style') == 'blue.css'){?>selected="selected"<?php }?>>blue.css</option>
                            <option value="orange.css" <?php if(get_option('xit_style') == 'orange.css'){?>selected="selected"<?php }?>>orange.css</option>
                        </select>
                    </td>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="logo_img">Logo image (full path to image)</label></th>
                    <td><input name="logo_img" type="text" id="logo_img" value="<?php echo get_option('xit_logo_img'); ?>" class="regular-text" /></td>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="logo_alt">Logo image ALT text</label></th>
                    <td><input name="logo_alt" type="text" id="logo_alt" value="<?php echo get_option('xit_logo_alt'); ?>" class="regular-text" /></td>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="logo_txt">Text logo</label></th>
                    <td>
                        <input name="logo_txt" type="text" id="logo_txt" value="<?php echo get_option('xit_logo_txt'); ?>" class="regular-text" />
                        <br /><em>Leave this empty if you entered an image as logo</em>
                    </td>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="logo_tagline">Logo Tag Line</label></th>
                    <td><input name="logo_tagline" type="text" id="logo_tagline" value="<?php echo get_option('xit_logo_tagline'); ?>" class="regular-text" /></td>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="tagline_width">Tag Line Box Width (px)</label><br /><em style="font-size:11px">Default width: 300px</em></th>
                    <td><input name="tagline_width" type="text" id="tagline_width" value="<?php echo get_option('xit_tagline_width'); ?>" class="regular-text" /></td>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="contact_email">Email Address for Contact Form</label></th>
                    <td><input name="contact_email" type="text" id="contact_email" value="<?php echo get_option('xit_contact_email'); ?>" class="regular-text" /></td>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="twitter_link">Twitter link</label></th>
                    <td><input name="twitter_link" type="text" id="twitter_link" value="<?php echo get_option('xit_twitter_link'); ?>" class="regular-text" /></td>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="facebook_link">Facebook link</label></th>
                    <td><input name="facebook_link" type="text" id="facebook_link" value="<?php echo get_option('xit_facebook_link'); ?>" class="regular-text" /></td>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="flickr">Flickr Photostream</label></th>
                    <td>
                        <select name="flickr" id="flickr">
                            <option value="yes" <?php if(get_option('xit_flickr') == 'yes'){?>selected="selected"<?php }?>>Yes</option>
                            <option value="no" <?php if(get_option('xit_flickr') == 'no'){?>selected="selected"<?php }?>>No</option>
                        </select>
                        <br /><em>Make sure you have FlickrRSS plugin activated if you choose to enable Flickr Photostream</em>
                    </td>
                </tr>
                <!-- Sidebar About Box -->
                <tr>
                    <th colspan="2"><strong>Sidebar About Box</strong></th>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="about_tit">Title</label></th>
                    <td><input name="about_tit" type="text" id="about_tit" value="<?php echo get_option('xit_about_tit'); ?>" class="regular-text" /></td>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="about_txt">Text</label></th>
                    <td><textarea cols="60" rows="5" name="about_txt" type="text" id="about_txt" class="regular-text" /><?php echo get_option('xit_about_txt'); ?></textarea></td>
                </tr>
                <!-- Ads Box Settings -->
                <tr>
                    <th colspan="2"><strong>Ads Box Settings</strong></th>
                </tr>
                <tr>
                    <th><label for="ads">Ads Section Enabled:</label></th>
                    <td>
                        <select name="ads" id="ads">
                            <option value="yes" <?php if(get_option('xit_ads') == 'yes'){?>selected="selected"<?php }?>>Yes</option>
                            <option value="no" <?php if(get_option('xit_ads') == 'no'){?>selected="selected"<?php }?>>No</option>
                        </select>
                        <br /><em>Make sure you have AdMinister plugin activated and have the position "Sidebar" created within the plugin.</em>
                    </td>
                </tr>
                <tr valign="top">
                    <th scope="row"><label for="advertise_page">Advertise Page</label></th>
                    <td><?php wp_dropdown_pages("name=advertise_page&show_option_none=".__('- Select -')."&selected=" .get_option('xit_advertise_page')); ?></td>
                </tr>
                <!-- Google Analytics -->
                <tr>
                    <th><label for="ads">Google Analytics code:</label></th>
                    <td><textarea name="analytics" id="analytics" rows="7" cols="70" style="font-size:11px;"><?php echo stripslashes(get_option('xit_analytics')); ?></textarea></td>
                </tr>
            </table>
            <p class="submit">
                <input type="submit" name="Submit" class="button-primary" value="Save Changes" />
                <input type="hidden" name="xit_settings" value="save" style="display:none;" />
            </p>
            </form>
            </div>
        <? }?>

        <?php
        function get_first_image() {
            global $post, $posts;
            $first_img = '';
            ob_start();
            ob_end_clean();
            $output = preg_match_all('/<img.+src=[\'"]([^\'"]+)[\'"].*>/i', $post->post_content, $matches);
            $first_img = $matches [1] [0];
            if(empty($first_img)){
                //Defines a default image
                $first_img = "/images/default.jpg";
            }
            return $first_img;
        }
        ?>

    The last line is line 223.

    Read the article

  • Flat File Connection Manager in SSIS package shows "Valid File Name Must be Selected"

    - by Traples
    (Flat File Location)

                           Samba Share | Windows Share
        (SSIS)             ------------+--------------
        XP 32bit           Works       | Works
        2003 Serv 32bit    Works       | Works
        Vista 64bit        ERROR       | Works
        Win 7 64bit        ERROR       | Works
        2008 Serv 64bit    ERROR       | Works

    I created an SSIS package in VS 2008 that parses a flat file from a shared folder and puts the records into a SQL Server db. I recently installed Windows 7 and VS 2008 on a new workstation. When I import the package from TFS and open it, I get the error:

        Validation error. Parse and Import Catalog Flat File: MySSISPackage: The file name "\\shared\flatfile.txt" specified in the connection was not valid.

    When I open the Flat File Connection Manager Editor, there is an error stating:

        A valid file name must be selected

    I can browse to and select the file from inside the editor, but I cannot change any properties, or move away from the General tab, because of this error. If I go back to my laptop (Windows XP), where the package was first created, there is no error. Both workstations are on the same domain, and I'm logging in using the same credentials. Any ideas as to why I would receive this error from one workstation and not another?

    UPDATE: If I take the .dtsx package from the running workstation and load it into SSIS on the server, I get the following errors when it tries to run:

        Error: The file name "\\shared\flatfile.txt" specified in the connection was not valid.
        Error: Connection "MySSISPackage" failed validation.
        Error: The file name property is not valid. The file name is a device or contains invalid characters.

    UPDATE 2:
    a) The shared folder I'm trying to pull the flat file from is a Samba share on a Unix box.
    b) If I access the file using SSIS on any 64-bit platform (Windows 7 64-bit, Vista 64-bit, Windows Server 2008) I get the error "A valid file name must be selected."
    c) Accessing the file using SSIS from 32-bit environments (Windows XP 32-bit, Windows Server 2003 32-bit), there is no problem.
    d) If I move the file to a shared folder on a Windows server, 64-bit SSIS recognizes the file.

    Read the article

  • Doing TDD Silverlight 4 RC using Visual Studio 2010 RC

    - by user133992
    First, I am glad to see better TDD support in VS2010. Support for generating code stubs from my tests is OK - not as good as more mature TDD plug-ins, but a good start. I am looking for some Silverlight 4.0 TDD best practices. First Question: Anyone have links, recommendations? I know the new Silverlight unit test capabilities are much better (Jeff Wilcox's Mix presentation). What I am focusing on right now is using TDD to develop pure Silverlight 4.0 Class Library projects - projects without a Silverlight UI project. I've been able to get it to work, but not as cleanly as it should be. I can create an empty VS project, add a Silverlight 4 Class Library project, add a Test Project (not a Silverlight Unit Test Project but a plain Test Project), and add a simple test in the Test Project such as:

        namespace Calculator.Test
        {
            [TestClass]
            public class CalculatorTests
            {
                [TestMethod]
                public void CalulatorAddTest()
                {
                    Calc c = new Calc();
                    int expected = 10;
                    int actual = c.Add(6, 4);
                    Assert.AreEqual<int>(expected, actual);
                }
            }
        }

    Using the new Generate Type and Method from Test feature, it will generate the following code in the Silverlight project:

        namespace Calculator
        {
            public class Calc
            {
                public int Add(int p, int p_2)
                {
                    throw new NotImplementedException();
                }
            }
        }

    When I run the tests the first time, it says the target assembly is Silverlight and the test cannot run (not the exact text, but the same general idea). When I change the implementation to:

        namespace Calculator
        {
            public class Calc
            {
                public int Add(int p, int p_2)
                {
                    return p + p_2;
                }
            }
        }

    and re-run the test, it works fine and the test goes green. It also works for all other TDD code I generate afterwards. I also get a warning mark in the Test Project's reference to the Calculator Silverlight Class Library assembly. Second Question: Any comments or ideas on whether this is just a bug in VS2010 RC, or is Silverlight Class Library TDD not really supported? I have not created a Silverlight UI project or changed any build or debug settings, so I have no idea what is hosting the Silverlight DLL. Finally, some of the Silverlight class libraries I need to write will provide functionality that requires elevated Out-Of-Browser rights. Based on the above, it looks like I can use TDD Test Projects against regular Silverlight 4.0 Class Libraries, but I have no idea how I can TDD the elevated OOB functionality without also creating the UI component that gets installed. The UI piece is not really needed for the library development and gets in the way of what I actually want to TDD. I know I can (and will) mock some of that functionality, but at some point I will also need the real thing in my tests. Third Question: Any ideas how to TDD a Silverlight 4.0 Class Library project that requires OOB elevated rights? Thanks!
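    A minimal sketch of the mocking approach mentioned above, assuming MSTest-style attributes; IElevatedFileAccess, FakeFileAccess, and DocumentLoader are hypothetical names introduced for illustration, not part of the question's code:

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // Hypothetical abstraction over the elevated-trust file APIs, so the
        // class library never references the OOB host directly.
        public interface IElevatedFileAccess
        {
            string ReadAllText(string path);
        }

        // Test double: returns canned contents instead of touching the disk.
        public class FakeFileAccess : IElevatedFileAccess
        {
            public string Contents { get; set; }
            public string ReadAllText(string path) { return Contents; }
        }

        // Class under test; only the production build wires in a real
        // implementation that calls the elevated APIs.
        public class DocumentLoader
        {
            private readonly IElevatedFileAccess files;
            public DocumentLoader(IElevatedFileAccess files) { this.files = files; }
            public string Load(string path) { return files.ReadAllText(path); }
        }

        [TestClass]
        public class DocumentLoaderTests
        {
            [TestMethod]
            public void LoadReturnsFileText()
            {
                var fake = new FakeFileAccess { Contents = "hello" };
                var loader = new DocumentLoader(fake);
                Assert.AreEqual("hello", loader.Load(@"C:\doc.txt"));
            }
        }

    The real elevated implementation still needs an in-browser or OOB host to exercise, but the library's logic stays testable in a plain Test Project.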

    Read the article

  • Delphi - WndProc() in thread never called

    - by Robert Oschler
    I had code that worked fine when running in the context of the main VCL thread. This code allocated its own WndProc() in order to handle SendMessage() calls. I am now trying to move it to a background thread because I am concerned that the SendMessage() traffic is affecting the main VCL thread adversely. So I created a worker thread with the sole purpose of allocating the WndProc() in its thread Execute() method, to ensure that the WndProc() exists in the thread's execution context. The WndProc() handles the SendMessage() calls as they come in. The problem is that the worker thread's WndProc() method is never triggered. Note: doExecute() is part of a template method that is called by my TThreadExtended class, which is a descendant of Delphi's TThread. TThreadExtended implements the thread Execute() method and calls doExecute() in a loop. I triple-checked, and doExecute() is being called repeatedly. Also note that I call PeekMessage() right after I create the WndProc(), in order to make sure that Windows creates a message queue for the thread. However, something I am doing is wrong, since the WndProc() method is never triggered. Here's the code below:

        // ========= BEGIN: CLASS - TWorkerThread ========================

        constructor TWorkerThread.Create;
        begin
          FWndProcHandle := 0;
          inherited Create(false);
        end;

        // ---------------------------------------------------------------

        // This call is the thread's Execute() method.
        procedure TWorkerThread.doExecute;
        var
          Msg: TMsg;
        begin
          // Create the WndProc() in our thread's context.
          if FWndProcHandle = 0 then
          begin
            FWndProcHandle := AllocateHWND(WndProc);
            // Call PeekMessage() to make sure we have a window queue.
            PeekMessage(Msg, FWndProcHandle, 0, 0, PM_NOREMOVE);
          end;

          if Self.Terminated then
          begin
            // Get rid of the WndProc().
            myDeallocateHWnd(FWndProcHandle);
          end;

          // Sleep a bit to avoid hogging the CPU.
          Sleep(5);
        end;

        // ---------------------------------------------------------------

        procedure TWorkerThread.WndProc(var Msg: TMessage);
        begin
          // THIS CODE IS NEVER CALLED.
          try
            if Msg.Msg = WM_COPYDATA then
            begin
              // Is LParam assigned?
              if (Msg.LParam > 0) then
              begin
                // Yes. Treat it as a copy data structure.
                with PCopyDataStruct(Msg.LParam)^ do
                begin
                  ... // Here is where I do my work.
                end;
              end; // if Assigned(Msg.LParam) then
            end; // if Msg.Msg = WM_COPYDATA then
          finally
            Msg.Result := 1;
          end; // try()
        end;

        // ---------------------------------------------------------------

        procedure TWorkerThread.myDeallocateHWnd(Wnd: HWND);
        var
          Instance: Pointer;
        begin
          Instance := Pointer(GetWindowLong(Wnd, GWL_WNDPROC));
          if Instance <> @DefWindowProc then
          begin
            // Restore the default windows procedure before freeing memory.
            SetWindowLong(Wnd, GWL_WNDPROC, Longint(@DefWindowProc));
            FreeObjectInstance(Instance);
          end;
          DestroyWindow(Wnd);
        end;

        // ========= END: CLASS - TWorkerThread ==========================

    Thanks, Robert

    Read the article

  • Entity Framework SaveChanges error details

    - by Marek Karbarz
    When saving changes with SaveChanges on a data context, is there a way to determine which entity causes an error? For example, sometimes I'll forget to assign a date to a non-nullable date field and get an "Invalid Date Range" error, but I get no information about which entity or which field caused it (I can usually track it down by painstakingly going through all my objects, but it's very time consuming). The stack trace is pretty useless, as it only shows me an error at the SaveChanges call, without any additional information as to where exactly it happened. Note that I'm not looking to solve any particular problem I have now; I would just like to know in general if there's a way to tell which entity/field is causing a problem. Here is a quick sample of a stack trace as an example - in this case the error happened because the CreatedOn date was not set on an IAComment entity, but it's impossible to tell that from this error/stack trace:

        [SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.]
           System.Data.SqlTypes.SqlDateTime.FromTimeSpan(TimeSpan value) +2127345
           System.Data.SqlTypes.SqlDateTime.FromDateTime(DateTime value) +232
           System.Data.SqlClient.MetaType.FromDateTime(DateTime dateTime, Byte cb) +46
           System.Data.SqlClient.TdsParser.WriteValue(Object value, MetaType type, Byte scale, Int32 actualLength, Int32 encodingByteSize, Int32 offset, TdsParserStateObject stateObj) +4997789
           System.Data.SqlClient.TdsParser.TdsExecuteRPC(_SqlRPC[] rpcArray, Int32 timeout, Boolean inSchema, SqlNotificationRequest notificationRequest, TdsParserStateObject stateObj, Boolean isCommandProc) +6248
           System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +987
           System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162
           System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +32
           System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +141
           System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) +12
           System.Data.Common.DbCommand.ExecuteReader(CommandBehavior behavior) +10
           System.Data.Mapping.Update.Internal.DynamicUpdateCommand.Execute(UpdateTranslator translator, EntityConnection connection, Dictionary`2 identifierValues, List`1 generatedValues) +8084396
           System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) +267

        [UpdateException: An error occurred while updating the entries. See the inner exception for details.]
           System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) +389
           System.Data.EntityClient.EntityAdapter.Update(IEntityStateManager entityCache) +163
           System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options) +609
           IADAL.IAController.Save(IAHeader head) in C:\Projects\IA\IADAL\IAController.cs:61
           IA.IAForm.saveForm(Boolean validate) in C:\Projects\IA\IA\IAForm.aspx.cs:198
           IA.IAForm.advance_Click(Object sender, EventArgs e) in C:\Projects\IA\IA\IAForm.aspx.cs:287
           System.Web.UI.WebControls.Button.OnClick(EventArgs e) +118
           System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +112
           System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +10
           System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +13
           System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +36
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +5019
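    For what it's worth, a minimal sketch of one way to narrow this down with the ObjectContext API: the UpdateException thrown by SaveChanges exposes the failing entries through its StateEntries property, so a wrapper can at least name the entity types involved (SaveHelper and the message text below are illustrative; the field itself still has to be inferred from the inner exception):

        using System;
        using System.Data;            // UpdateException
        using System.Data.Objects;    // ObjectContext
        using System.Linq;

        static class SaveHelper
        {
            // Wraps SaveChanges and rethrows with the names of the entities
            // that were part of the failed update command.
            public static void SaveWithDiagnostics(ObjectContext context)
            {
                try
                {
                    context.SaveChanges();
                }
                catch (UpdateException ex)
                {
                    string offenders = string.Join(", ",
                        ex.StateEntries
                          .Where(e => e.Entity != null)
                          .Select(e => e.Entity.GetType().Name)
                          .ToArray());
                    throw new InvalidOperationException(
                        "SaveChanges failed for entities: " + offenders, ex);
                }
            }
        }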

    Read the article

  • Binding to the selected item in an ItemsControl

    - by Jensen
    I created a custom ComboBox as follows (note: the code is not correct, but you should get the general idea). The ComboBox contains 2 dependency properties which matter: TitleText and DescriptionText.

        <Grid>
            <TextBlock x:Name="Title"/>
            <Grid x:Name="CBG">
                <ToggleButton/>
                <ContentPresenter/>
                <Popup/>
            </Grid>
        </Grid>

    I want to use this ComboBox to display a wide range of options. I created a class called Setting, which inherits from DependencyObject, to create usable items; I created a DataTemplate to bind the contents of this Setting object to my ComboBox; and I created a UserControl which contains an ItemsControl which has as its template my previously mentioned DataTemplate. I can fill it with Setting objects.

        <DataTemplate x:Key="myDataTemplate">
            <ComboBox TitleText="{Binding Title}" DescriptionText="{Binding DescriptionText}"/>
        </DataTemplate>

        <UserControl>
            <Grid>
                <StackPanel Grid.Column="0">
                    <ItemsControl Template="{StaticResource myDataTemplate}">
                        <Item>
                            <Setting Title="Foo" Description="Bar">
                                <Option>Yes</Option><Option>No</Option>
                            </Setting>
                        </Item>
                    </ItemsControl>
                </StackPanel>
                <StackPanel Grid.Column="1">
                    <TextBlock x:Name="Description"/>
                </StackPanel>
            </Grid>
        </UserControl>

    I would like to have the DescriptionText of the selected ComboBox (selected by either the IsFocus of the ComboBox control or the IsOpen property of the popup) placed in the Description TextBlock in my UserControl. One way I managed to achieve this was replacing my ItemsControl with a ListBox, but this caused several issues:

    - it always showed a scrollbar, even though I disabled it;
    - it wouldn't catch focus when my popup was open, but only when I explicitly selected the item in my ListBox;
    - when I enabled the OverridesDefaultStyle property, the contents of the ListBox wouldn't show up at all;
    - I had to re-theme the ListBox control to match my UserControl layout.

    What's the best and easiest way to get my DescriptionText to show up without using a ListBox or creating a custom Selector control (as that had the same effect as a ListBox)? The goal at the end is to loop through all the items (maybe get them into an ObservableCollection of some sort) and to save them into my settings file.
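    A hedged sketch of one ListBox-free possibility in code-behind: since GotFocus bubbles, a single handler on the UserControl can walk up from the focused element to its owning custom ComboBox and copy that control's DescriptionText into the Description TextBlock. MyComboBox and SettingsControl are hypothetical names standing in for the controls described above, and the popup's IsOpen case may need separate handling because popup content lives in its own visual tree:

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;

        public partial class SettingsControl : UserControl
        {
            public SettingsControl()
            {
                InitializeComponent();
                // handledEventsToo = true, so focus changes inside every
                // generated item are seen even if a child handles the event.
                AddHandler(GotFocusEvent, new RoutedEventHandler(UpdateDescription), true);
            }

            private void UpdateDescription(object sender, RoutedEventArgs e)
            {
                // Walk up from the actually-focused element to the owning combo.
                DependencyObject node = e.OriginalSource as DependencyObject;
                while (node != null && !(node is MyComboBox))
                {
                    node = VisualTreeHelper.GetParent(node);
                }

                MyComboBox box = node as MyComboBox;
                if (box != null)
                {
                    Description.Text = box.DescriptionText;
                }
            }
        }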

    Read the article

  • ActiveDirectoryMembershipProvider and ADAM (or AD LDS) and SetPassword

    - by Iulian
    By the subject line it seems to be a rather broad subject, and I need some help here. Basically, what I want is to use ActiveDirectoryMembershipProvider with an ADAM instance to authenticate users in an ASP.NET web application. My development environment is a Windows 7 machine with an AD LDS instance on it, whilst the QA server is a Windows 2003 server with an ADAM instance on it. I have all the required users on both instances, plus one with the administrator role (CN=Admin,CN=xxx,DC=xxx,C=xx) which I want to use as the connection user.

    Using connectionProtection="None" connectionUsername="CN=Admin,CN=xxx,DC=xxx,C=xx" connectionPassword="xxx", I am able to authenticate in both environments (dev & qa). If I change connectionProtection to "Secure", I am not able to authenticate anymore; the error I get is "Parser Error Message: Unable to establish secure connection with the server". To me it sounds wrong to use connectionProtection="None", although I found a lot of samples on the net using this setting. Can I use connectionProtection="Secure" to connect to an ADAM instance using an account defined on that instance having the Administrator role? What other choices do I have (like using a domain account)? What if the machine where I am to deploy the application is not part of the domain - will this affect the behavior in any way? I am a novice in this respect, so I would really appreciate some clear answers or some directions as to where to look.

    Now, besides the "signing in" feature of the ActiveDirectoryMembershipProvider, I also want to add an extra one, which is setting the password without knowing the old one (something that will be used by a "reset password" feature). So I added a couple of extension methods to the provider, and used System.DirectoryServices classes like DirectoryEntry and the like. When creating a directory entry, I use the same credentials provided in web.config for the provider, minus the AuthenticationType, as I don't know which combination of the flags corresponds to None/Secure. I am able to use Invoke "SetPassword" with the ADS_OPTION_PASSWORD_METHOD option as ADS_PASSWORD_ENCODE_CLEAR on my dev machine (w/ AD LDS instance); nevertheless, in the qa environment (w/ ADAM instance) I am getting an error like "Exception Details: System.DirectoryServices.DirectoryServicesCOMException: An operations error occurred. (Exception from HRESULT: 0x80072020)". I am quite sure it is not about AD LDS vs ADAM, but probably another configuration / permission issue. So can anyone help me with some hints on how to use this SetPassword feature? And as a general question, what are the best practices when it comes to using ADAM regarding security, programming, etc.? Thanks in advance, Iulian
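    For reference, a hedged sketch of the managed equivalent of those ADSI options: System.DirectoryServices exposes PasswordEncoding and PasswordPort on DirectoryEntry.Options, which is the usual route for SetPassword against ADAM/AD LDS over a non-SSL port. All paths, ports, and credentials below are placeholders, and the right AuthenticationTypes value depends on how the instance and its accounts are configured:

        using System.DirectoryServices;

        // Bind to the target user on the ADAM/AD LDS instance (placeholder names).
        using (DirectoryEntry user = new DirectoryEntry(
            "LDAP://qaserver:389/CN=SomeUser,CN=xxx,DC=xxx,C=xx",
            "CN=Admin,CN=xxx,DC=xxx,C=xx",
            "adminPassword",
            AuthenticationTypes.Secure))   // may need adjusting for instance-local accounts
        {
            // Managed counterpart of ADS_PASSWORD_ENCODE_CLEAR: without an SSL
            // port, ADAM only accepts SetPassword if told to send the password
            // in the clear (acceptable in a lab, not in production).
            user.Options.PasswordEncoding = PasswordEncodingMethod.PasswordEncodingClear;
            user.Options.PasswordPort = 389;

            user.Invoke("SetPassword", new object[] { "NewP@ssw0rd!" });
        }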

    Read the article

  • WPF XAML: how to get StackPanel's children to fill maximum space downward?

    - by Edward Tanguay
    I simply want flowing text on the left, and a help box on the right. The help box should extend all the way to the bottom. If you take out the outer StackPanel below, it works great. But for reasons of layout (I'm inserting UserControls dynamically) I need to have the wrapping StackPanel. How do I get the GroupBox to extend down to the bottom of the StackPanel? As you can see, I've tried:

        VerticalAlignment="Stretch" VerticalContentAlignment="Stretch" Height="Auto"

    XAML:

        <Window x:Class="TestDynamic033.Test3"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Test3" Height="300" Width="600">
            <StackPanel VerticalAlignment="Stretch" Height="Auto">
                <DockPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Height="Auto" Margin="10">
                    <GroupBox DockPanel.Dock="Right" Header="Help" Width="100" Background="Beige"
                              VerticalAlignment="Stretch" VerticalContentAlignment="Stretch" Height="Auto">
                        <TextBlock Text="This is the help that is available on the news screen." TextWrapping="Wrap" />
                    </GroupBox>
                    <StackPanel DockPanel.Dock="Left" Margin="10" Width="Auto" HorizontalAlignment="Stretch">
                        <TextBlock Text="Here is the news that should wrap around." TextWrapping="Wrap"/>
                    </StackPanel>
                </DockPanel>
            </StackPanel>
        </Window>

    Answer: Thanks Mark, using DockPanel instead of StackPanel cleared it up. In general, I find myself using DockPanel more and more now for WPF layout. Here's the fixed XAML:

        <Window x:Class="TestDynamic033.Test3"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Test3" Height="300" Width="600" MinWidth="500" MinHeight="200">
            <DockPanel VerticalAlignment="Stretch" Height="Auto">
                <DockPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Height="Auto" MinWidth="400" Margin="10">
                    <GroupBox DockPanel.Dock="Right" Header="Help" Width="100"
                              VerticalAlignment="Stretch" VerticalContentAlignment="Stretch" Height="Auto">
                        <Border CornerRadius="3" Background="Beige">
                            <TextBlock Text="This is the help that is available on the news screen." TextWrapping="Wrap" Padding="5"/>
                        </Border>
                    </GroupBox>
                    <StackPanel DockPanel.Dock="Left" Margin="10" Width="Auto" HorizontalAlignment="Stretch">
                        <TextBlock Text="Here is the news that should wrap around." TextWrapping="Wrap"/>
                    </StackPanel>
                </DockPanel>
            </DockPanel>
        </Window>

    Read the article

  • Prevent recursive CTE visiting nodes multiple times

    - by bacar
    Consider the following simple DAG:

        1->2->3->4

    And a table, #bar, describing this (I'm using SQL Server 2005):

        parent_id   child_id
        1           2
        2           3
        3           4
        ...         (other edges, not connected to the subgraph above)

    Now imagine that I have some other arbitrary criteria that select the first and last edges, i.e. 1-2 and 3-4. I want to use these to find the rest of my graph. I can write a recursive CTE as follows (I'm using terminology from MSDN):

        with foo(parent_id, child_id) as
        (
            -- anchor member that happens to select first and last edges:
            select parent_id, child_id from #bar where parent_id in (1, 3)
            union all
            -- recursive member:
            select #bar.* from #bar
            join foo on #bar.parent_id = foo.child_id
        )
        select parent_id, child_id from foo

    However, this results in edge 3-4 being selected twice:

        parent_id   child_id
        1           2
        3           4
        2           3
        3           4    -- 2nd appearance!

    How can I prevent the query from recursing into subgraphs that have already been described? I could achieve this if, in the "recursive member" part of the query, I could reference all the data that has been retrieved by the recursive CTE so far (and supply a predicate in the recursive member excluding nodes already visited). However, I think I can access data that was returned by the last iteration of the recursive member only. This will not scale well when there is a lot of such repetition. Is there a way of preventing this unnecessary additional recursion? Note that I could use "select distinct" in the last line of my statement to achieve the desired results, but this seems to be applied after all the (repeated) recursion is done, so I don't think this is an ideal solution. Edit - hainstech suggests stopping recursion by adding a predicate to exclude recursing down paths that were explicitly in the starting set, i.e. recurse only where foo.child_id not in (1,3). That works for the case above only because it is simple - all the repeated sections begin within the anchor set of nodes. It doesn't solve the general case, where they may not be. E.g., consider adding edges 1-4 and 4-5 to the above set. Edge 4-5 will be captured twice, even with the suggested predicate. :(

    Read the article

  • WCF ReliableSession and Timeouts

    - by user80108
    I have a WCF service used mainly for managing documents in a repository. I used the chunking channel sample from MS so that I could upload/download huge files. Now I have implemented reliable sessions with the service, and I am seeing some strange behaviors. Here are the timeout values I am using:

        this.SendTimeout    = new TimeSpan(0, 10, 0);
        this.OpenTimeout    = new TimeSpan(0, 1, 0);
        this.CloseTimeout   = new TimeSpan(0, 1, 0);
        this.ReceiveTimeout = new TimeSpan(0, 10, 0);
        reliableBe.InactivityTimeout = new TimeSpan(0, 2, 0);

    I have the following issues.

    1. If the service is not up and running, the clients do not get disconnected after the OpenTimeout. I tried it with my test client.

    Scenario 1, without a reliable session, I get the following exception:

        Could not connect to net.tcp://localhost:8788/MediaManagementService/ep1. The connection attempt lasted for a time span of 00:00:00.9848790. TCP error code 10061: No connection could be made because the target machine actively refused it 127.0.0.1:8788

    This is the correct behavior, as I have given the OpenTimeout as 1 sec.

    Scenario 2, with a reliable session, I get the same exception:

        Could not connect to net.tcp://localhost:8788/MediaManagementService/ep1. The connection attempt lasted for a time span of 00:00:00.9692460. TCP error code 10061: No connection could be made because the target machine actively refused it 127.0.0.1:8788.

    But this message comes after around 10 minutes (I believe after the SendTimeout). So here I have just enabled the reliable session, and now it looks like OpenTimeout = SendTimeout for the client. Is this the desired behavior?

    2. Issue while uploading huge files with a reliable session: the general rule is that you have to set a huge value for maxReceivedMessageSize, SendTimeout and ReceiveTimeout. But in the case of the chunking channel, the max received message size doesn't matter, as the data is sent in chunks. So I set a huge value for the Send and ReceiveTimeout: say 10 hours. Now the upload is going fine, but it has a side effect: even if the service is not up, it takes 10 hours to time out the client connection, due to the behavior mentioned in (1). Please let me know your thoughts on this behavior.
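    For the first issue, a hedged sketch of one possible workaround: open the channel explicitly with ICommunicationObject.Open(TimeSpan), so the connect phase (including the reliable-session handshake) is bounded by its own timeout instead of riding on the send timeout. IMyService and the binding values here are illustrative placeholders, not taken from the question's actual configuration:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            void Ping();
        }

        public static class ClientFactory
        {
            public static IMyService Connect()
            {
                NetTcpBinding binding = new NetTcpBinding();
                binding.ReliableSession.Enabled = true;
                binding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(2);
                binding.OpenTimeout = TimeSpan.FromSeconds(10); // fail fast on a dead endpoint
                binding.SendTimeout = TimeSpan.FromHours(10);   // long timeout only for transfers

                ChannelFactory<IMyService> factory = new ChannelFactory<IMyService>(
                    binding,
                    new EndpointAddress("net.tcp://localhost:8788/MediaManagementService/ep1"));

                IMyService proxy = factory.CreateChannel();

                // Force the open up front, bounded by an explicit timeout, rather
                // than letting the first call discover the dead endpoint via SendTimeout.
                ((ICommunicationObject)proxy).Open(TimeSpan.FromSeconds(10));
                return proxy;
            }
        }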

    Read the article

  • List of freely available programming books

    - by Karan Bhangui
    I'm trying to amass a list of programming books with opensource licenses, like Creative Commons, GPL, etc. The books can be about a particular programming language or about computers in general. Hoping you guys could help:

    Languages

    - BASH: Advanced Bash-Scripting Guide (An in-depth exploration of the art of shell scripting)
    - C: The C book
    - C++: Thinking in C++; C++ Annotations; How to Think Like a Computer Scientist
    - C#: .NET Book Zero: What the C or C++ Programmer Needs to Know About C# and the .NET Framework; Illustrated C# 2008 (Dead Link); Data Structures and Algorithms with Object-Oriented Design Patterns in C#; Threading in C#
    - Common Lisp: Practical Common Lisp; On Lisp
    - Java: Thinking in Java; How to Think Like a Computer Scientist; Java Thin-Client Programming
    - JavaScript: Eloquent JavaScript
    - Haskell: Real world Haskell; Learn You a Haskell for Great Good!
    - Objective-C: The Objective-C Programming Language
    - Perl: Extreme Perl (license not specified - home page is saying "freely available"); The Mason Book (Open Publication License); Practical mod_perl (CreativeCommons Attribution Share-Alike License); Higher-Order Perl; Learning Perl the Hard Way
    - PHP: Practical PHP Programming; Zend Framework: Survive the Deep End
    - PowerShell: Mastering PowerShell
    - Prolog: Building Expert Systems in Prolog; Adventure in Prolog; Prolog Programming A First Course; Logic, Programming and Prolog (2ed); Introduction to Prolog for Mathematicians; Learn Prolog Now!; Natural Language Processing Techniques in Prolog
    - Python: Dive Into Python; Dive Into Python 3; How to Think Like a Computer Scientist; A Byte of Python; Python for Fun; Invent Your Own Computer Games With Python
    - Ruby: Why's (Poignant) Guide to Ruby; Programming Ruby - The Pragmatic Programmer's Guide; Mr. Neighborly's Humble Little Ruby Book
    - SQL: Practical PostgreSQL
    - x86 assembly: Paul Carter's tutorial
    - Lua: Programming In Lua (for v5 but still largely relevant)

    Algorithms and Data Structures

    - Algorithms
    - Data Structures and Algorithms with Object-Oriented Design Patterns in Java
    - Planning Algorithms

    Frameworks/Projects

    - The Django Book
    - The Pylons Book
    - Introduction to Design Patterns in C++ with Qt 4 (Open Publication License)

    Version control

    - The SVN Book
    - Mercurial: The Definitive Guide
    - Pro Git

    UNIX / Linux

    - The Art of Unix Programming
    - Linux Device Drivers, Third Edition

    Others

    - Structure and Interpretation of Computer Programs
    - The Little Book of Semaphores
    - Mathematical Logic - an Introduction
    - An Introduction to the Theory of Computation
    - Developers Developers Developers Developers
    - Linkers and loaders
    - Beej's Guide to Network Programming
    - Maven: The Definitive Guide

    I will expand on this list as I get comments or when I think of more :D

    Related:
    - Programming texts and reference material for my Kindle
    - What are some good free programming books?
    - Can anyone recommend a free software engineering book?

    Edit: Oh I didn't notice the community wiki feature. Feel free to edit your suggestions right in!

    Read the article

  • Managing database connections in an Android Activity

    - by Daniel Lew
    I have an application with a ListActivity that uses a CursorAdapter as its adapter. The ListActivity opens the database and does the querying for the CursorAdapter, which is all well and good, but I am having issues with figuring out when to close both the Cursor and the SQLiteDatabase. The way things are handled right now, if the user finishes the activity, I close the database and the cursor. However, this still ends up with the DalvikVM warning me that I've left a database open - for example, if the user hits the "home" button (leaving the activity in the task's stack), rather than the "back" button. If I close them during pause and then re-query during resume, then I don't get any errors, but then a user cannot return to the list without it requerying (and thus losing the user's place in the list). By this I mean, the user can click on any item in the list and open a new activity based on it, but will often want to hit "back" afterwards and return to the same place on the list. If I requery, then I cannot return the user back to the correct spot. What is the proper way to handle this issue? I want the list to remain scrolled properly, but I don't want the VM to keep complaining about unclosed databases. Edit: Here's a general outline of how I handle the code at the moment:

        public class MyListActivity extends ListActivity {
            private Cursor mCursor;
            private CursorAdapter mAdapter;

            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                mAdapter = new MyCursorAdapter(this);
                setListAdapter(mAdapter);
            }

            protected void onPause() {
                super.onPause();
                if (isFinishing()) {
                    mCursor.close();
                }
            }

            protected void onDestroy() {
                super.onDestroy();
                mCursor.close();
            }

            private void updateQuery() {
                // If we had a cursor open before, close it.
                if (mCursor != null) {
                    mCursor.close();
                }
                MyDbHelper dbHelper = new MyDbHelper(this);
                SQLiteDatabase db = dbHelper.getReadableDatabase();
                mCursor = db.query(...);
                mAdapter.changeCursor(mCursor);
                db.close();
            }
        }

    updateQuery() can be called multiple times because the user can filter the results via menu items (I left this part out of the code, as the problem still occurs even if the user does no filtering). Again, the issue is that when I hit home I get leak errors. Yet, after going home, I can go back to the app and find my list again - cursor fully intact.

    Read the article

  • Focusable EditText inside ListView

    - by Joe
    I've spent about 6 hours on this so far, and been hitting nothing but roadblocks. The general premise is that there is some row in a ListView (whether it's generated by the adapter, or added as a header view) that contains an EditText widget and a Button. All I want to do is be able to use the jogball/arrows to navigate the selector to individual items like normal, but when I get to a particular row - even if I have to explicitly identify the row - that has a focusable child, I want that child to take focus instead of indicating the position with the selector. I've tried many possibilities, and have so far had no luck.

    Layout:

        <ListView
            android:id="@android:id/list"
            android:layout_height="fill_parent"
            android:layout_width="fill_parent" />

    Header view:

        EditText view = new EditText(this);
        listView.addHeaderView(view, null, true);

    Assuming there are other items in the adapter, using the arrow keys will move the selection up/down in the list, as expected; but when getting to the header row, it is also displayed with the selector, and there is no way to focus into the EditText using the jogball. Note: tapping on the EditText will focus it at that point; however, that relies on a touchscreen, which should not be a requirement. ListView apparently has two modes in this regard:

    1. setItemsCanFocus(true): the selector is never displayed, but the EditText can get focus when using the arrows. The focus search algorithm is hard to predict, and there is no visual feedback (on any rows, whether they have focusable children or not) on which item is selected, both of which can give the user an unexpected experience.

    2. setItemsCanFocus(false): the selector is always drawn in non-touch-mode, and the EditText can never get focus - even if you tap on it. To make matters worse, calling editTextView.requestFocus() returns true, but in fact does not give the EditText focus.

    What I'm envisioning is basically a hybrid of 1 & 2: rather than the list setting whether all items are focusable or not, I want to set focusability for a single item in the list, so that the selector seamlessly transitions from selecting the entire row for non-focusable items to traversing the focus tree for items that contain focusable children. Any takers?

    Read the article

  • ADO.NET (WCF) Data Services Query Interceptor Hangs IIS

    - by PreMagination
    I have an ADO.NET Data Service that's supposed to provide read-only access to a somewhat complex database. Logically I have table-per-type (TPT) inheritance in my data model, but the EDM doesn't implement inheritance. (Limitation of EF and navigation properties on derived types. STILL not fixed in EF4!) I can query my EDM directly (using a separate project) using a copy of the query I'm trying to run against the web service; results are returned within 10 seconds. Disabling the query interceptors, I'm able to make the same query against the web service, and results are returned similarly quickly. I can enable some of the query interceptors, and the results are returned slowly, up to a minute or so later. Alternatively, I can enable all the query interceptors, expand fewer of the properties on the main object I'm querying, and results are returned in a similar period of time. (I've increased some of the timeout periods.) Up to this point, SQL Profiler indicates the slowdown is the database. (That's a post for a different day.) But when I enable all my query interceptors and expand all the properties I'd like to have, the IIS worker process pegs the CPU for 20 minutes and a query is never even made against the database. This implies to me that yes, my implementation probably sucks, but regardless, the Data Services "tier" is having an issue it shouldn't. WCF tracing didn't reveal anything interesting to my untrained eye.

    Details:

    - Data model: Agent-Person-Student
    - Student has a collection of referrals
    - Students and referrals are private; queries against the web service should only return "your" students and referrals. This means Person and Agent need to be filtered too. Other entities (Agent-Organization-School) can be accessed by anyone who has authenticated.
    - The existing security model is poorly suited to perform this type of filtering for this type of data access; the query interceptors are complicated and cause EF to generate some entertaining SQL queries.

    Sample interceptor:

        [QueryInterceptor("Agents")]
        public Expression<Func<Agent, Boolean>> OnQueryAgents()
        {
            // Agent is a Person(1), Educator(2), Student(3), or Other Person(13); allow if scope permissions exist
            return ag =>
                (ag.AgentType.AgentTypeId == 1 || ag.AgentType.AgentTypeId == 2 ||
                 ag.AgentType.AgentTypeId == 3 || ag.AgentType.AgentTypeId == 13)
                && ag.Person.OrganizationPersons.Count<OrganizationPerson>(op =>
                       op.Organization.ScopePermissions.Any<ScopePermission>(p =>
                           p.ApplicationRoleAccount.Account.UserName == HttpContext.Current.User.Identity.Name
                           && p.ApplicationRoleAccount.Application.ApplicationId == 124)
                       || op.Organization.HierarchyDescendents.Any<OrganizationsHierarchy>(oh =>
                              oh.AncestorOrganization.ScopePermissions.Any<ScopePermission>(p =>
                                  p.ApplicationRoleAccount.Account.UserName == HttpContext.Current.User.Identity.Name
                                  && p.ApplicationRoleAccount.Application.ApplicationId == 124))) > 0;
        }

    The query interceptors for Person, Student, and Referral are all very similar, i.e. they traverse multiple same/similar tables to look for ScopePermissions, as above.

    Sample query:

        var referrals = (from r in service.Referrals
                             .Expand("Organization/ParentOrganization")
                             .Expand("Educator/Person/Agent")
                             .Expand("Student/Person/Agent")
                             .Expand("Student")
                             .Expand("Grade")
                             .Expand("ProblemBehavior")
                             .Expand("Location")
                             .Expand("Motivation")
                             .Expand("AdminDecision")
                             .Expand("OthersInvolved")
                         where r.DateCreated >= coupledays && r.DateDeleted == null
                         select r);

    Any suggestions or tips would be greatly appreciated, for fixing my current implementation or developing a new one, with the caveats that the database can't be changed and that ultimately I need to expose a large portion of the database via a web service that limits data access to the data authorized, for the purpose of data integration with multiple outside parties. THANK YOU!!!
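    A hedged sketch of one possible simplification, assuming an EF version that can translate Contains into an IN clause (EF4 can; EF v1 cannot): resolve the caller's authorized organization IDs once, outside the expression, so each interceptor closes over a small ID list instead of re-deriving permissions inside the expression tree. MyDataService, GetAuthorizedOrganizationIds, and OrganizationId are hypothetical names, not from the question's model:

        using System;
        using System.Data.Services;   // QueryInterceptorAttribute
        using System.Linq;
        using System.Linq.Expressions;

        public partial class MyDataService
        {
            // Hypothetical helper: one cheap query (or cached lookup) per request
            // that flattens ScopePermissions plus the organization hierarchy
            // into a plain list of organization IDs the current user may see.
            private int[] GetAuthorizedOrganizationIds()
            {
                // ... query ScopePermissions/HierarchyDescendents once, cache per user ...
                return new int[0];
            }

            [QueryInterceptor("Agents")]
            public Expression<Func<Agent, bool>> OnQueryAgents()
            {
                int[] orgIds = GetAuthorizedOrganizationIds();
                // Contains() typically becomes an IN (...) clause, keeping the
                // generated SQL flat instead of deeply correlated.
                return ag => ag.Person.OrganizationPersons
                               .Any(op => orgIds.Contains(op.Organization.OrganizationId));
            }
        }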

    Read the article

  • Transactions in codeigniter with multiple tables.

    - by Ethan
    Hey SO, I'm new to transactions in general, but especially with CodeIgniter. I'm using InnoDB and everything, but my transactions aren't rolling back when I want them to. Here's my code (slightly simplified):

        $dog_db = $this->load->database('dog', true);
        $dog_db->trans_begin();

        $dog_id = $this->dogs->insert($new_dog); // Gets primary key of insert
        if (!$dog_id) {
            $dog_db->trans_rollback();
            throw new Exception('We have had an error trying to add this dog. Please go back and try again.');
        }

        $new_review['dog_id']     = $dog_id;
        $new_review['user_id']    = $user_id;
        $new_review['date_added'] = time();
        if (!$this->reviews->insert($new_review)) // If the insert fails
        {
            $dog_db->trans_rollback();
            throw new Exception('We have had an error trying to add this dog. Please go back and try again.');
        }

        // ADD DESCRIPTION
        $new_description['description'] = $add_dog['description'];
        $new_description['dog_id']      = $dog_id;
        $new_description['user_id']     = $user_id;
        $new_description['date_added']  = time();
        if (!$this->descriptions->insert($new_description)) {
            $dog_db->trans_rollback();
            throw new Exception('We have had an error trying to add this dog. Please go back and try again.');
        }

        $booze_db->trans_rollback(); // THIS IS JUST TO SEE IF IT WORKS
        throw new Exception('We have had an error trying to add this dog. Please go back and try again.');
        $booze_db->trans_commit();
        }
        catch (Exception $e) {
            echo $e->getMessage();
        }

    I'm not getting any error messages, but it's not rolling back either. It should roll back at that final trans_rollback right before the commit. My models are all on the "dog" database, so I think that the transaction would carry into the models' functions. Maybe you just can't use models like this. Any help would be greatly appreciated! Thanks!

    Read the article

  • subscription in reporting services

    - by shoaib
    I want to subscribe to a report on a specific schedule in Reporting Services 2008, i.e. the report should be delivered to the user automatically on schedule. I am using Visual Studio 2008. I have done the configuration settings (rsreportserver.config, app.config after adding references to the asmx files) by reference to MSDN. The code runs fine (no exceptions occur), and I also get a subscription ID back from CreateSubscription, indicating all is going fine. But after running the code, no entry is made in the Subscriptions table of the ReportServer database, and I also do not get any mail. Through the Report Server web tool, I can get email and an entry is made in the database - but not from code. Please, someone help me: what am I missing? The code is given below (keep in mind, I am using VS2008):

        void SendReportEmail()
        {
            RSServiceReference.ReportingService2005SoapClient rs = new RSServiceReference.ReportingService2005SoapClient();
            rs.ClientCredentials.Windows.AllowedImpersonationLevel = new System.Security.Principal.TokenImpersonationLevel();

            string batchID = string.Empty;
            RSServiceReference.ServerInfoHeader infoHeader = rs.CreateBatch(out batchID);
            BatchHeader bh = new BatchHeader()
            {
                BatchID = batchID,
                AnyAttr = infoHeader.AnyAttr
            };

            string report = "/PCMSR6Reports/PaymentRequestStatusMIS";
            string desc = "Send email from code to [email protected]";
            string eventType = "TimedSubscription";
            string scheduleXml = "<ScheduleDefinition xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"><StartDateTime xmlns=\"http://schemas.microsoft.com/sqlserver/2006/03/15/reporting/reportingservices\">2010-03-06T15:15:00.000+05:00</StartDateTime></ScheduleDefinition>";

            RSServiceReference.ParameterValue[] extensionParams = new RSServiceReference.ParameterValue[7];
            extensionParams[0] = new RSServiceReference.ParameterValue();
            extensionParams[0].Name = "TO";
            extensionParams[0].Value = "[email protected]";
            extensionParams[1] = new RSServiceReference.ParameterValue();
            extensionParams[1].Name = "IncludeReport";
            extensionParams[1].Value = "True";
            extensionParams[2] = new RSServiceReference.ParameterValue();
            extensionParams[2].Name = "RenderFormat";
            extensionParams[2].Value = "MHTML";
            extensionParams[3] = new RSServiceReference.ParameterValue();
            extensionParams[3].Name = "Subject";
            extensionParams[3].Value = "@ReportName was executed at @ExecutionTime";
            extensionParams[4] = new RSServiceReference.ParameterValue();
            extensionParams[4].Name = "Comment";
            extensionParams[4].Value = "Here is your test report for testing purpose";
            extensionParams[5] = new RSServiceReference.ParameterValue();
            extensionParams[5].Name = "IncludeLink";
            extensionParams[5].Value = "True";
            extensionParams[6] = new RSServiceReference.ParameterValue();
            extensionParams[6].Name = "Priority";
            extensionParams[6].Value = "NORMAL";

            RSServiceReference.ParameterValue[] parameters = new RSServiceReference.ParameterValue[10];
            parameters[0] = new RSServiceReference.ParameterValue();
            parameters[0].Name = "BranchId";
            parameters[0].Value = "1";
            parameters[1] = new RSServiceReference.ParameterValue();
            parameters[1].Name = "UserName";
            parameters[1].Value = "admin";
            parameters[2] = new RSServiceReference.ParameterValue();
            parameters[2].Name = "SupplierId";
            parameters[2].Value = "0";

            string matchData = scheduleXml;
            RSServiceReference.ExtensionSettings extSettings = new RSServiceReference.ExtensionSettings();
            extSettings.ParameterValues = extensionParams;
            extSettings.Extension = "Report Server Email";

            try
            {
                string sub = "";
                RSServiceReference.ServerInfoHeader SubID = rs.CreateSubscription(bh, report, extSettings, desc, eventType, matchData, parameters, out sub);
                rs.FireEvent(bh, "TimedSubscription", sub);
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
            }
        }

    A detailed response will be highly appreciated.
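    One hedged observation, sketched against the same proxy: operations issued under a BatchHeader are only committed to the catalog when the batch is executed, so a CreateSubscription/FireEvent pair that is never followed by ExecuteBatch would leave the Subscriptions table untouched. This assumes the generated WCF proxy exposes ExecuteBatch with the batch header as a parameter, the same way the other methods above do; alternatively, drop the batching and pass a null header:

        try
        {
            string sub = "";
            rs.CreateSubscription(bh, report, extSettings, desc, eventType, matchData, parameters, out sub);
            rs.FireEvent(bh, "TimedSubscription", sub);

            // Commit the batched operations; without this call the batch is
            // discarded and nothing reaches the ReportServer database.
            rs.ExecuteBatch(bh);
        }
        catch (Exception e)
        {
            Console.WriteLine(e);
        }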

    Read the article

  • What is in your Mathematica tool bag?

    - by Timo
    We all know that Mathematica is great, but it also often lacks critical functionality. What kind of external packages / tools / resources do you use with Mathematica? I'll edit (and invite anyone else to do so too) this main post to include resources which are focused on general applicability in scientific research and which as many people as possible will find useful. Feel free to contribute anything, even small code snippets (as I did below for a timing routine). Also, undocumented and useful features in Mathematica 7 and beyond you found yourself, or dug up from some paper/site, are most welcome. Please include a short description or comment on why something is great or what utility it provides. If you link to books on Amazon with affiliate links please mention it, e.g., by putting your name after the link.

    Packages:

    - LevelScheme is a package that greatly expands Mathematica's capability to produce good looking plots. I use it if not for anything else then for the much, much improved control over frame/axes ticks.
    - David Park's Presentation Package ($50 - no charge for updates)

    Tools:

    - MASH is Daniel Reeves's excellent perl script essentially providing scripting support for Mathematica 7. (This is finally built in as of Mathematica 8 with the -script option.)

    Resources:

    - Wolfram's own repository MathSource has a lot of useful if narrow notebooks for various applications. Also check out the other sections such as Current Documentation, Courseware for lectures, and Demos for, well, demos.

    Books:

    - Mathematica programming: an advanced introduction by Leonid Shifrin (web, pdf) is a must read if you want to do anything more than For loops in Mathematica.
    - Quantum Methods with Mathematica by James F. Feagin (amazon)
    - The Mathematica Book by Stephen Wolfram (amazon) (web)
    - Schaum's Outline (amazon)
    - Mathematica in Action by Stan Wagon (amazon) - 600 pages of neat examples and goes up to Mathematica version 7. Visualization techniques are especially good, you can see some of them on the author's Demonstrations Page.
    - Mathematica Programming Fundamentals by Richard Gaylord (pdf) - A good concise introduction to most of what you need to know about Mathematica programming.

    Undocumented (or scarcely documented) Features:

    - How to customize Mathematica keyboard shortcuts. See this question.
    - How to inspect patterns and functions used by Mathematica's own functions. See this answer.
    - How to achieve consistent size for GraphPlots in Mathematica? See this question.

    Read the article

< Previous Page | 436 437 438 439 440 441 442 443 444 445 446 447  | Next Page >