Search Results

Search found 3474 results on 139 pages for 'prepared statements'.

Page 104 of 139

  • MySQL Connector/Net 6.6.2 has been released

    - by fernando
    MySQL Connector/Net 6.6.2, a new version of the all-managed .NET driver for MySQL, has been released. This is the first of two beta releases intended to introduce users to the new features in the release. The release is feature complete and should be stable enough for users to understand the new features and how we expect them to work. As is the case with all non-GA releases, it should not be used in any production environment. It is appropriate for use with MySQL server versions 5.0-5.6.

    It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site).

    The 6.6 version of MySQL Connector/Net brings the following new features:
      * Stored routine debugging
      * Entity Framework 4.3 Code First support
      * Pluggable authentication (third parties can now plug new authentication mechanisms into the driver)
      * Full Visual Studio 2012 support: everything from Server Explorer to Intellisense & the Stored Routine debugger

    Stored Procedure Debugging
    -------------------------------------------
    We are very excited to introduce stored procedure debugging into our Visual Studio integration. It works in a very intuitive manner: simply click 'Debug Routine' from Server Explorer. You can debug stored routines, functions & triggers. Some of the new features in this release include:
      * Besides normal breakpoints, you can define conditional & pass-count breakpoints.
      * The debugger editor now shows colorizing.
      * You can now change the values of locals in a function scope (previously this caused a deadlock, because functions execute within their own transaction).
      * You can now also debug triggers for 'replace' SQL statements.
      * In general, anything related to locals, watches, breakpoints, stepping & the call stack should work in a similar way to C#'s Visual Studio debugger.
    Some limitations remain, due to the current debugger architecture:
      * Some MySQL functions cannot currently be debugged (get_lock, release_lock, begin, commit, rollback, set transaction level).
      * Only one debug session may be active on a given server.
    The debugger is feature complete at this point. We look forward to your feedback.

    Documentation
    -------------------------------------
    The documentation is still being developed and will be readily available soon (before Beta 2). You can view the current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html. You can find our team blog at http://blogs.oracle.com/MySQLOnWindows, and you can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!

  • Property overwrite behaviour

    - by jeremyj
    I thought it worth sharing a note about property overwrite behaviour, because I found it confusing at first, in the hope of preventing some learning pain for the uninitiated with MSBuild :-)
    The confusion for me came from the redundancy of using a Condition statement in a _project_ level property to test that a property has not been previously set. What I mean is that the following two declarations are always identical in behaviour, regardless of whether the property has been supplied on the command line:

      <PropertyGroup>
        <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
      </PropertyGroup>

    has the same behaviour, regardless of command line override, as:

      <PropertyGroup>
        <PropA>PropA set at project level</PropA>
      </PropertyGroup>

    i.e. the two above property declarations have the same result whether the property is overridden on the command line or not. To prove this, experiment with the following .proj file:

      <?xml version="1.0" encoding="utf-8"?>
      <Project ToolsVersion="4.0" >
        <PropertyGroup>
          <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
        </PropertyGroup>
        <Target Name="Target1">
          <Message Text="PropA: $(PropA)"/>
        </Target>
        <Target Name="Target2">
          <PropertyGroup>
            <PropA>PropA set in Target2</PropA>
          </PropertyGroup>
          <Message Text="PropA: $(PropA)"/>
        </Target>
        <Target Name="Target3">
          <PropertyGroup>
            <PropA Condition=" '$(PropA)' == '' ">PropA set in Target3</PropA>
          </PropertyGroup>
          <Message Text="PropA: $(PropA)"/>
        </Target>
        <Target Name="Target4">
          <PropertyGroup>
            <PropA Condition=" '$(PropA)' != '' ">PropA set in Target4</PropA>
          </PropertyGroup>
          <Message Text="PropA: $(PropA)"/>
        </Target>
      </Project>

    Try invoking it using both of the following invocations and observe the output:
      1) >msbuild blog.proj /t:Target1;Target2;Target3;Target4
      2) >msbuild blog.proj /t:Target1;Target2;Target3;Target4 "/p:PropA=PropA set on command line"
    Then try those two invocations with the following three variations of specifying PropA at the project level:
      1) <PropertyGroup>
           <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>
         </PropertyGroup>
      2) <PropertyGroup>
           <PropA>PropA set at project level</PropA>
         </PropertyGroup>
      3) <PropertyGroup>
           <PropA Condition=" '$(PropA)' != '' ">PropA set at project level</PropA>
         </PropertyGroup>

  • Broken Views

    - by Ajarn Mark Caldwell
    “SELECT *” isn’t just hazardous to performance; it can actually return blatantly wrong information. There are a number of blog posts and articles out there that actively discourage the use of the SELECT * FROM … syntax. The two most common explanations that I have seen are:

    Performance: The SELECT * syntax will return every column in the table, but frequently you really only need a few of the columns, so by using SELECT * you are retrieving large volumes of data that you don’t need but that the system has to process, marshal across tiers, and so on. It would be much more efficient to select only the specific columns that you need.

    Future-proofing: If you are taking other shortcuts in your code, along with using SELECT *, you are setting yourself up for trouble down the road when enhancements are made to the system. For example, if you use SELECT * to return results from a table into a DataTable in .NET, and then reference columns positionally (e.g. myDataRow[5]), you could end up with bad data if someone happens to add a column into position 3, skewing all the remaining columns’ ordinal positions. Or if you use INSERT…SELECT *, then you will likely run into errors when a new column is added to the source table in any position.

    And if you use SELECT * in the definition of a view, you will run into a variation of the future-proofing problem mentioned above. One of the guys on my team, Mike Byther, ran across this in a project we were doing, but fortunately he caught it while we were still in development. I asked him to put together a test to prove that this was related to the use of SELECT * and not some other anomaly. I’ll walk you through the test script so you can see for yourself what happens.

    We are going to create a table and two views that are based on that table; one of them uses SELECT * and the other explicitly lists the column names. The script to create these objects is listed below.

      IF OBJECT_ID('testtab') IS NOT NULL DROP TABLE testtab
      go
      IF OBJECT_ID('testtab_vw') IS NOT NULL DROP VIEW testtab_vw
      go
      IF OBJECT_ID('testtab_vw_named') IS NOT NULL DROP VIEW testtab_vw_named
      go
      CREATE TABLE testtab (col1 NVARCHAR(5) null, col2 NVARCHAR(5) null)
      INSERT INTO testtab(col1, col2)
      VALUES ('A','B'), ('A','B')
      GO
      CREATE VIEW testtab_vw AS SELECT * FROM testtab
      GO
      CREATE VIEW testtab_vw_named AS SELECT col1, col2 FROM testtab
      go

    Now, to prove that the two views currently return equivalent results, select from them.

      SELECT 'star', col1, col2 FROM testtab_vw
      SELECT 'named', col1, col2 FROM testtab_vw_named

    OK, so far, so good. Now, what happens if someone makes a change to the definition of the underlying table, and that change results in a new column being inserted between the two existing columns? (Side note: I normally prefer to append new columns to the end of the table definition, but some people like to keep their columns alphabetized, and for the clarity of later people reviewing the schema, it may make sense to group certain columns together. Whatever the reason, it sometimes happens, and you need to protect yourself and your code from the repercussions.)

      DROP TABLE testtab
      go
      CREATE TABLE testtab (col1 NVARCHAR(5) null, col3 NVARCHAR(5) NULL, col2 NVARCHAR(5) null)
      INSERT INTO testtab(col1, col3, col2)
      VALUES ('A','C','B'), ('A','C','B')
      go
      SELECT 'star', col1, col2 FROM testtab_vw
      SELECT 'named', col1, col2 FROM testtab_vw_named

    I would have expected that the view using SELECT * in its definition would essentially pass through the column name and still retrieve the correct data, but that is not what happens. When you run our two select statements again, you see that the view that is based on SELECT * actually retrieves the data based on the ordinal position of the columns at the time that the view was created. Sure, one work-around is to recreate the view, but you can’t really count on other developers to know the dependencies you have built in, and they won’t necessarily recreate the view when they refactor the table.

    I am sure that there are reasons and justifications for why views behave this way, but I find it particularly disturbing that you can have code asking for col2 but actually be receiving data from col3. By the way, for the record, this entire scenario and accompanying test script apply to SQL Server 2008 R2 with Service Pack 1.

    So, let the developer beware… know what assumptions are in effect around your code, and keep on discouraging people from using the SELECT * syntax in anything but the simplest of ad-hoc queries.

    And of course, let’s clean up after ourselves. To eliminate the database objects created during this test, run the following commands.

      DROP TABLE testtab
      DROP VIEW testtab_vw
      DROP VIEW testtab_vw_named
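
    A hedged footnote to the recreate-the-view work-around above (my own addition, not part of the original test script): SQL Server also ships a system procedure, sp_refreshview, that rebinds a non-schema-bound view to the table's current column layout without dropping it. A minimal sketch, assuming the test objects above still exist:

      -- Rebind testtab_vw's metadata to the current definition of testtab.
      -- Afterwards both views should again return the actual col2 data.
      EXEC sp_refreshview 'testtab_vw'

      SELECT 'star', col1, col2 FROM testtab_vw
      SELECT 'named', col1, col2 FROM testtab_vw_named

    Of course this has the same weakness the author notes: someone has to remember to run it after refactoring the table.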

  • SQL Server: Writing CASE expressions properly when NULLs are involved

    - by Mladen Prajdic
    We’ve all written a CASE expression (yes, it’s an expression and not a statement) or two every now and then. But did you know there are actually two formats you can write the CASE expression in? This actually bit me when I was trying to add some new functionality to an old stored procedure. In some rare cases the stored procedure just didn’t work correctly. After a quick look it turned out to be a CASE expression problem when dealing with NULLs.

    In the first format we make simple “equals to” comparisons against a value:

      SELECT CASE <value>
               WHEN <equals this value> THEN <return this>
               WHEN <equals this value> THEN <return this>
               -- ... more WHENs here
               ELSE <return this>
             END

    The second format is much more flexible, since it allows for complex conditions. USE THIS ONE!

      SELECT CASE
               WHEN <value> <compared to> <value> THEN <return this>
               WHEN <value> <compared to> <value> THEN <return this>
               -- ... more WHENs here
               ELSE <return this>
             END

    Now that we know both formats, and you know which one to use (the second one, if that hasn’t been clear enough), here’s an example of how the first format WILL make your evaluation logic WRONG. Run the following code for different values of @i. Just comment out any 2 of the 3 “SELECT @i =” statements.

      DECLARE @i INT
      SELECT @i = -1   -- first result
      SELECT @i = 55   -- second result
      SELECT @i = NULL -- third result

      SELECT @i AS OriginalValue,
             -- first CASE format. DON'T USE THIS!
             CASE @i
               WHEN -1 THEN '-1'
               WHEN NULL THEN 'We have a NULL!'
               ELSE 'We landed in ELSE'
             END AS DontUseThisCaseFormatValue,
             -- second CASE format. USE THIS!
             CASE
               WHEN @i = -1 THEN '-1'
               WHEN @i IS NULL THEN 'We have a NULL!'
               ELSE 'We landed in ELSE'
             END AS UseThisCaseFormatValue

    When the value of @i is -1, everything works as expected, since both formats go into the -1 WHEN branch. When the value of @i is 55, everything again works as expected, since both formats go into the ELSE branch. When the value of @i is NULL, the problems become evident. The first format doesn’t go into the WHEN NULL branch, because it makes an equality comparison between two NULLs. Because NULL is an unknown value, NULL = NULL does not evaluate to true (it evaluates to UNKNOWN, which CASE treats like false). That is why the first format falls into the ELSE branch, while the second format correctly handles the proper IS NULL comparison.

    Please use the second, more explicit format. Your future self will be very grateful when he doesn’t have to discover these kinds of bugs.
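
    A one-statement demonstration of the comparison rule at the heart of this (my own sketch, not from the original post; it assumes the default ANSI_NULLS ON behaviour):

      -- NULL = NULL yields UNKNOWN rather than TRUE, so only the IS NULL branch matches.
      SELECT CASE
               WHEN NULL = NULL THEN 'equality matched'
               WHEN NULL IS NULL THEN 'IS NULL matched'
               ELSE 'neither matched'
             END AS Result  -- returns 'IS NULL matched'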

  • Is changing my job now a wise decision? [closed]

    - by FlaminPhoenix
    First, a little background about myself. I am a JavaScript programmer with 3.8 years of experience. I joined my current company a year and 3 months ago, and I was recruited as a JavaScript programmer. I was under the impression that I was a programmer in a programming team, but this was not the case. No one else except me and my manager knows anything about programming in my team. The other two teammates copy-paste stuff from websites into Excel sheets. I was told I was being recruited for a new project, and it was true. The only problem was that the server-side language they were using was PHP. They were using a popular library with PHP, and I had never worked with PHP before. Nevertheless, I learnt it well enough to get things working, and received high praise from my boss's boss on whichever project I worked on. Words like "wow" and "This looks great, the client's gonna be impressed with this" were sprinkled every now and then on reviewing my work. They even managed to sell my work to a couple of clients, and as I understand it, both of my projects are going to fetch them a pretty buck.

    The problem: I was asked to move into a project which my manager was handling. I asked them for training on the project, which never came, and sure enough I couldn't complete my first task on the new project without shortcomings. I told my manager there were things I didn't know how to get done in the new project due to lack of training. His project had 0 documentation. I was told he would "take care" of everything relating to those shortcomings. In the meantime, I was asked to switch to another project. My manager made the necessary changes and later told me that the build had "broken" on the production server and that I needed to "test" my changes before saying things were done. I never deployed it on the production server. He did. I never saw / had the opportunity to see the final build before it went to production. He called me for a separate meeting and started pointing fingers at me, but I took full responsibility even if I didn't have to. He later got on a call with his boss, in my presence, and gave him the impression that it was all my fault. I have not confronted him about this so far.

    I have worked late / done overtime a lot without them asking, but last week, I had just got home from work when I got calls asking me to solve an issue which till then they had kept quiet about, even though they had been informed about it. I asked my manager why I hadn't been tasked with this when I was in the office. He started telling me which statements to put where, as if to mock me, and said that this "is hardly an overtime issue", and this pissed me off. Also, during the previous meeting, he was constantly talking highly about his work while trying to demean mine. In the meantime, I have attended an interview with another MNC, and the interviewers there were fully respectful of my decision to leave my current company. It's a software company, so I can expect my colleagues to know a lot more than me. I'm told I can expect their offer any time this week.

    My questions:
      * Is my anger towards my manager justified?
      * While leaving, do I tell him that it's because of his actions that I'm leaving?
      * Do I erupt in anger and tell him that he shouldn't have put the blame on me, since he was the one doing the deployment?

    This is going to be my second resignation to this company. The first time I wanted to resign, I was asked to stay back, and my manager promised a lot of changes, a couple of which were made.
How do I keep myself from getting into such situations with my employers in the future?

  • HTTP 500 Internal Server Error on IIS 7.5 with MVC3

    - by Tor Haugen
    I am trying to install an MVC3 application on our production server with no luck. The application is from a 3rd party (compiled), so debugging is not available to me. Besides, I strongly suspect the error occurs before any code in the site has a chance to execute. Our staging server is - as far as I can determine - set up exactly like the production server. Both run Windows Server 2008 Standard R2, and both also run a Sharepoint 2010 site (though this install doesn't touch that in any way). IIS is version 7.5, and .NET Framework 4.0 (required by the MVC app) was (recently) installed (by me, with a reboot after). The application is very small and simple and, as far as I can tell, sticks to fairly standard functionality, including forms authentication (i.e. it doesn't pull any dirty tricks). The error message shown in the browser is very general:

      HTTP Error 500.0 - Internal Server Error
      An error message detailing the cause of this specific request failure can be found in the
      application event log of the web server. Please review this log entry to discover what
      caused this error to occur.

    The bit about 'An error message detailing the cause' being in the application event log seems to be just speculation - a pious hope that whatever code actually caused the error will log it. Nothing useful is to be found in the event log (only the very same message, logged by IIS):

      Module: AspNetInitClrHostFailureModule
      Notification: BeginRequest
      Handler: StaticFile
      Error Code: 0x80070002
      Requested URL: http://xxxxxx.xxxxxx.xx:80/
      Physical Path: C:\Xxxxxxx\Prod\WebClient
      Logon Method: Not yet determined
      Logon User: Not yet determined

    Using Failed Request Tracing, I have been able to track the error (as also indicated above) to the AspNetInitClrHostFailureModule:

      103. NOTIFY_MODULE_START
           ModuleName: AspNetInitClrHostFailureModule
           Notification: 1 (BEGIN_REQUEST)
           fIsPostNotification: false
      104. SET_RESPONSE_ERROR_DESCRIPTION
           ErrorDescription: An error message detailing the cause of this specific request failure
           can be found in the application event log of the web server. Please review this log entry
           to discover what caused this error to occur.
      105. MODULE_SET_RESPONSE_ERROR_STATUS
           ModuleName: AspNetInitClrHostFailureModule
           Notification: 1 (BEGIN_REQUEST)
           HttpStatus: 500
           HttpReason: Internal Server Error
           HttpSubStatus: 0
           ErrorCode: 2147942402 - The system cannot find the file specified. (0x80070002)
           ConfigExceptionInfo:

    So there you have it. Seemingly, the AspNetInitClrHostFailureModule fails to find some file. So some questions are:

      1. What is the AspNetInitClrHostFailureModule? It is not listed in the fairly exhaustive list of modules configurable in IIS manager for the site. I have had no success googling it either. Maybe it's secret..
      2. I access the root URL of the site. This is supposed to be redirected to /Account/LogOn by the FormsAuthenticationModule. Why then is the handler StaticFile? Is that a clue?
      3. I have tried removing the infamous system.webserver/modules/runAllManagedModulesForAllRequests attribute, and that makes the error go away (but MVC not actually working, of course). I am prepared to specify all necessary modules manually if that's what it takes, but if the AspNetInitClrHostFailureModule is actually needed, I will be just as stuck. Does anyone know, or can anyone direct me to someone who knows, exactly what modules a typical MVC3 application actually needs?

    This question might well be a duplicate of this one, but he didn't get any useful answer, and he also asked less specific questions. So I'll have my own go. Hoping for some help here :)

    Edit: I have now tried setting up a trivial MVC 3 project on the server. I created a new project using the MVC Application template, compiled it and deployed it to the server. It behaves in exactly the same way. The server simply cannot run MVC 3 projects.

  • Hudson on debian lenny

    - by Laurent
    Hello, I installed the Hudson daemon on one server (running Debian Lenny testing) some time ago. All was working until I performed an upgrade. Since then, Hudson isn't accessible on port 8080 (which is the default port used). I have looked for iptables problems, but port 8080 is open in INPUT and OUTPUT. The configuration file in /etc/default/hudson seems okay; I haven't touched it. And if I do a ps aux | grep hudson, the Hudson daemon is running.

    Update 1: What is really strange for me is that in /var/log/hudson/hudson.log I get no error:

      [Winstone 2010/02/10 17:10:04] - Control thread shutdown successfully
      [Winstone 2010/02/10 17:10:04] - Winstone shutdown successfully
      Running from: /usr/share/hudson/hudson.war
      [Winstone 2010/02/10 17:10:43] - Beginning extraction from war file
      hudson home directory: /var/lib/hudson
      [Winstone 2010/02/10 17:10:44] - HTTP Listener started: port=8080
      [Winstone 2010/02/10 17:10:44] - AJP13 Listener started: port=8009
      [Winstone 2010/02/10 17:10:44] - Winstone Servlet Engine v0.9.10 running: controlPort=disabled
      10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained
      INFO: Started initialization
      10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained
      INFO: Listed all plugins
      10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained
      INFO: Prepared all plugins
      10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained
      INFO: Started all plugins
      10 févr. 2010 17:10:46 hudson.model.Hudson$4 onAttained
      INFO: Loaded all jobs
      10 févr. 2010 17:10:46 hudson.model.Hudson$4 onAttained
      INFO: Completed initialization
      10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext prepareRefresh
      INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@caa559d: display name [Root WebApplicationContext]; startup date [Wed Feb 10 17:10:47 CET 2010]; root of context hierarchy
      10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory
      INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@caa559d]: org.springframework.beans.factory.support.DefaultListableBeanFactory@40d2f5f1
      10 févr. 2010 17:10:47 org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
      INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@40d2f5f1: defining beans [daoAuthenticationProvider,authenticationManager,userDetailsService]; root of factory hierarchy
      10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext prepareRefresh
      INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@4d88a387: display name [Root WebApplicationContext]; startup date [Wed Feb 10 17:10:47 CET 2010]; root of context hierarchy
      10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory
      INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@4d88a387]: org.springframework.beans.factory.support.DefaultListableBeanFactory@6153e0c0
      10 févr. 2010 17:10:47 org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
      INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@6153e0c0: defining beans [filter,legacy]; root of factory hierarchy
      10 févr. 2010 17:10:47 hudson.TcpSlaveAgentListener <init>
      INFO: JNLP slave agent listener started on TCP port 59750

    Update 2: What I get with lsof -i -n -P | grep hudson:

      java 28985 hudson 97u  IPv6 2002707 0t0 TCP *:8080  (LISTEN)
      java 28985 hudson 99u  IPv6 2002708 0t0 TCP *:8009  (LISTEN)
      java 28985 hudson 147u IPv6 2002711 0t0 TCP *:59750 (LISTEN)
      java 28985 hudson 150u IPv6 2002712 0t0 UDP *:33848

    I don't know what else I can verify. Does someone have an idea to help me resolve this problem?

  • How do I achieve lossless JPEG joining without truncation of partial MCUs?

    - by Karan
    I am working on a project for which I need to join thousands of JPEG images losslessly (I'm not talking about the Lossless JPEG/JPEG 2000/JPEG-LS formats here). The aforementioned images have varying levels of chroma subsampling (1x1, 1x2, 2x1, 2x2), resulting in varying MCU sizes (8x8, 8x16, 16x8, 16x16 px). However, in any given set of images to be joined together, each image has identical characteristics.

    For now, let's assume I only have 2 images. Image #1 (I1) is 256x256px in size and #2 (I2) is 239x256px in size. 2x2 subsampling is used, such that the MCU size is 16x16px. I2 thus obviously has partial MCUs at the right edge, since its width is not evenly divisible by 16. (I've read that so-called 'partial' MCUs actually contain the data for a complete MCU, but the image dimensions instruct the renderer to only display the relevant pixels and ignore/hide the extra ones.)

    Looking around for tools that could help me accomplish this, I came across a modified version of JpegTran that contains an experimental lossless crop 'n' drop (cut & paste) feature. All the other apps I encountered that support lossless JPEG editing seem to utilise IJG's (JpegTran) code, so this seemed to be the logical choice. Also, given the sheer number of images, I wanted something that could preferably be run from the command line, so that I could automate the process with a script.

    Unfortunately, while everything else worked fine, it seems JpegTran truncates the partial MCUs instead of retaining them. Thus in the example above, the final joined image contains all of I1, but only 224x256px of I2. Why 224? Because 239 = 14x16 + 15, which means there are 14 full MCUs along the width and 1 partial MCU (just 1px short of the complete 16px). The last 15px is what is getting blanked, leading to a 495x256px image with 15px of blank (grey) pixels at the right edge.

    [images: I1 (left) + I2 (right) = joined result; shame that imgur re-compresses them]

    As you can clearly see, the red portion (15px) of I2 has been truncated by JpegTran. If the MCUs were 8px in width, the lost portion would have been the right-most 7px of I2. Similarly, joining I3 (256x239px) *below* I1 would cause the loss of 7 or 15px, depending on the MCU height of course:

    [images: I1 (top) + I3 (bottom) = joined result]

    If this is better suited to some other StackExchange (or even non-SE) site/forum where JPEG/image encoding experts hang out, do let me know. Can what I am attempting even be done, or is the so-called 'lossless' JPEG crop 'n' drop only valid for images with no partial MCUs? (Maybe that is why the feature is still in an "experimental state" more than a decade after being introduced...) Until I know for sure that it is impossible, I am not interested in suggestions for lossy joining. Avoiding any generational loss whatsoever is the sole reason why I'm breaking my head over this, else I'd have had this done and dusted ages ago. Also, I am not interested in suggestions related to switching image formats. I do not control the source of the images.

    If it can be done, how? Please keep in mind that any alternate apps suggested must ideally be capable of automation, given the requirements stated above. (But given how unlikely it is that I'll even receive a useful answer given the constraints, I would be happy with any app suggestion just as long as it actually works. I can always look into an AutoIT/AHK script or something later to automate it.)
I understand that an odd-sized final image might cause issues, so I am fully prepared to accept any solution, even if it results in blank (preferably black) padding pixels to the right/bottom. What I mean is, I don't care if I1 + I2 is 496x256px (1px padding) or even 512x256px (17px padding) in size, as long as the final image contains all the actual image data from both source images, and the entire process is lossless. Obviously the lesser the padding (if any), the better, but at this point any solution will do. A Windows-based solution would be perfect, but a Linux-based one would be entirely acceptable.

  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two-node A/A cluster with common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but am not sure which is the right way to configure the gfs resource. Here is the rm section of /etc/cluster/cluster.conf:

      <rm>
        <failoverdomains>
          <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
            <failoverdomainnode name="rhc-n1"/>
          </failoverdomain>
          <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
            <failoverdomainnode name="rhc-n2"/>
          </failoverdomain>
        </failoverdomains>
        <resources>
          <script file="/etc/init.d/clvm" name="clvmd"/>
          <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/>
        </resources>
        <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart">
          <script ref="clvmd">
            <clusterfs ref="gfs"/>
          </script>
        </service>
        <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart">
          <script ref="clvmd">
            <clusterfs ref="gfs"/>
          </script>
        </service>
      </rm>

    This is what I mean: when using the clusterfs resource agent to handle a GFS partition, it is not unmounted by default (unless the force_unmount option is given). This way, when I issue

      clusvcadm -s shared-storage-inst1

    clvm is stopped, but GFS is not unmounted, so the node cannot alter the LVM structure on shared storage anymore, but can still access data. And even though a node can do it quite safely (dlm is still running), this seems rather inappropriate to me, since clustat reports that the service on that particular node is stopped. Moreover, if I later try to stop cman on that node, it will find a dlm lock produced by GFS and fail to stop.

    I could have simply added force_unmount="1", but I would like to know what the reason behind the default behaviour is. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue on how the decision was made.

    Apart from that, I have found sample configurations where people manage GFS partitions with the gfs2 init script - https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources - or even by simply enabling services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like:

      chkconfig gfs2 on

    If I understand the latter approach correctly, such a cluster only controls whether nodes are still alive and can fence errant ones, but it has no control over the status of its resources. I have some experience with Pacemaker, and I'm used to all resources being controlled by the cluster, so that action can be taken not only when there are connectivity issues, but also when any of the resources misbehave.

    So, which is the right way for me to go?
      1. Leave the GFS partition mounted (any reasons to do so?)
      2. Set force_unmount="1". Won't this break anything? Why isn't this the default?
      3. Use a script resource <script file="/etc/init.d/gfs2" name="gfs"/> to manage the GFS partition.
      4. Start it at boot and don't include it in cluster.conf (any reasons to do so?)

    This may be the sort of question that cannot be answered unambiguously, so it would also be of much value to me if you shared your experience or expressed your thoughts on the issue. How does, for example, /etc/cluster/cluster.conf look when configuring gfs with Conga or ccs (they are not available to me, since for now I have to use Ubuntu for the cluster)? Thank you very much!

  • sp_send_dbmail attach files stored as varbinary in database

    - by Mindstorm Interactive
    I have a two-part question relating to sending query results as attachments using sp_send_dbmail.

    Problem 1: Only basic .txt files will open. Any other format, like .pdf or .jpg, is corrupted.
    Problem 2: When attempting to send multiple attachments, I receive one file with all the file names glued together.

    I'm running SQL Server 2005, and I have a table storing uploaded documents:

      CREATE TABLE [dbo].[EmailAttachment](
          [EmailAttachmentID] [int] IDENTITY(1,1) NOT NULL,
          [MassEmailID] [int] NULL, -- foreign key
          [FileData] [varbinary](max) NOT NULL,
          [FileName] [varchar](100) NOT NULL,
          [MimeType] [varchar](100) NOT NULL
      )

    I also have a MassEmail table with standard email stuff. Here is the SQL send-mail script. For brevity, I've excluded the declare statements.

      while ( (select count(MassEmailID) from MassEmail where status = 20) > 0 )
      begin
          select @MassEmailID = Min(MassEmailID) from MassEmail where status = 20
          select @Subject = [Subject] from MassEmail where MassEmailID = @MassEmailID
          select @Body = Body from MassEmail where MassEmailID = @MassEmailID

          set @query = 'set nocount on; select cast(FileData as varchar(max)) from Mydatabase.dbo.EmailAttachment where MassEmailID = ' + CAST(@MassEmailID as varchar(100))

          select @filename = ''
          select @filename = COALESCE(@filename + ',', '') + FileName from EmailAttachment where MassEmailID = @MassEmailID

          exec msdb.dbo.sp_send_dbmail
              @profile_name = 'MASS_EMAIL',
              @recipients = '[email protected]',
              @subject = @Subject,
              @body = @Body,
              @body_format = 'HTML',
              @query = @query,
              @query_attachment_filename = @filename,
              @attach_query_result_as_file = 1,
              @query_result_separator = '; ',
              @query_no_truncate = 1,
              @query_result_header = 0;

          update MassEmail set status = 30, SendDate = GetDate() where MassEmailID = @MassEmailID
      end

    I am able to successfully read files from the database, so I know the binary data is not corrupted. .txt files only read when I cast FileData to varchar, but clearly the original headers are lost. It's also worth noting that the attachment file sizes are different from the original files. That is most likely due to improper encoding as well. So I'm hoping there's a way to create file headers using the stored mimetype, or some way to include file headers in the binary data?

    I'm also not confident in the values of the last few parameters, and I know COALESCE is not quite right, because it prepends the first file name with a comma. But good documentation is nearly impossible to find. Please help!
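
    One hedged observation from outside the original post: @query results are rendered as text before they are attached, which would explain why varbinary content survives only for plain .txt files. The usual detour is to skip @query for binary data and hand sp_send_dbmail files that already exist on disk via its @file_attachments parameter (a semicolon-separated list). A minimal sketch; the share and file names below are hypothetical, and it assumes the FileData rows have first been written out to a location the Database Mail account can read:

      -- Attach pre-existing binary files directly instead of piping varbinary through @query.
      -- '\\fileserver\massemail\...' and the recipient address are placeholders for this sketch.
      exec msdb.dbo.sp_send_dbmail
          @profile_name = 'MASS_EMAIL',
          @recipients = 'someone@example.com',
          @subject = @Subject,
          @body = @Body,
          @body_format = 'HTML',
          @file_attachments = '\\fileserver\massemail\report.pdf;\\fileserver\massemail\logo.jpg';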

  • UIViewController presentModalViewController: animated: doing nothing?

    - by ryyst
    Hi, I recently started a project using Apple's Utility Application example project. In the example project, there's an info button that shows an instance of FlipsideView. If you know the Weather.app, you know what the button acts like. I then changed MainWindow.xib to contain a scroll view in the middle of the window and a page-control view at the bottom of the window (again, like the Weather.app). The scroll view gets filled with instances of MainView.

    When I then clicked the info button, the FlipsideView would show, but only in the area that was previously filled by the MainView instance - this means that the page-control view at the bottom of the page still showed when the FlipsideView instance got loaded. So I thought that I would simply add a UIViewController for the top-most window, which is the one declared inside the AppDelegate created alongside the project. So I created a subclass of UIViewController, put an instance of it inside MainWindow.xib and connected its view outlet to the UIWindow declared as window inside the app delegate. I also changed the button's action so that it now sends a message to the MainWindowController instance. The message does get sent (I checked with NSLog() statements), but the FlipsideView doesn't get shown. Here's the relevant (?) code:

      FlipsideViewController *controller = [[FlipsideViewController alloc] initWithNibName:@"FlipsideView" bundle:nil];
      controller.delegate = self;
      controller.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
      [self presentModalViewController:controller animated:YES];
      [controller release];

    Why's this not working? I've uploaded the entire project here for you to be able to see the whole thing. Thanks for the help! -- Ry

  • managing images in an iphone/ipad universal app

    - by taber
    Hi, I'm just curious as to what methods people are using to dynamically use larger or smaller images in their universal iPhone/iPad apps. I created a large test image and tried scaling it down (using cocos2d) by 0.46875. After viewing that in the iPhone 4.0 simulator, I found the results were pretty crappy... rough pixel edges, etc. Plus, loading huge image files for iPhone users when they don't need them is pretty lame. So I guess what I will probably have to do is save out two versions of every sprite: large (for the iPad side) and small (for iPhone/iPod Touch), then detect the user's device and spit out the proper sprite, like so:

      NSString *deviceType = [UIDevice currentDevice].model;
      CCSprite *test;
      if ([deviceType isEqualToString:@"iPad"]) {
          test = [CCSprite spriteWithFile:@"testBigHuge.png"];
      } else {
          test = [CCSprite spriteWithFile:@"testRegularMcTiny.png"];
      }
      [self addChild: test];

    How are you guys doing this? I'd rather avoid sprinkling all of my code with if statements like this. I also want to avoid using .xib files, since it's an OpenGL-based app. Thanks!

  • ZF2 Zend\Db Insert/Update Using Mysql Expression (Zend\Db\Sql\Expression?)

    - by Aleross
    Is there any way to include MySQL expressions like NOW() in the current build of ZF2 (2.0.0beta4) through Zend\Db and/or TableGateway insert()/update() statements? Here is a related post on the mailing list, though it hasn't been answered: http://zend-framework-community.634137.n4.nabble.com/Zend-Db-Expr-and-Transactions-in-ZF2-Db-td4472944.html

    It appears that ZF1 used something like:

      $data = array(
          'update_time' => new \Zend\Db\Expr("NOW()")
      );

    And after looking through Zend\Db\Sql\Expression I tried:

      $data = array(
          'update_time' => new \Zend\Db\Sql\Expression("NOW()"),
      );

    But I get the following error:

      Catchable fatal error: Object of class Zend\Db\Sql\Expression could not be converted to string in /var/www/vendor/ZF2/library/Zend/Db/Adapter/Driver/Pdo/Statement.php on line 256

    As recommended here: http://zend-framework-community.634137.n4.nabble.com/ZF2-beta3-Zend-Db-Sql-Insert-new-Expression-now-td4536197.html I'm currently just setting the date with PHP code, but I'd rather use the MySQL server as the single source of truth for dates/times.

      $data = array(
          'update_time' => date('Y-m-d H:i:s'),
      );

    Thanks, I appreciate any input!

  • android.intent.action.SCREEN_ON doesn't work as a receiver intent filter

    - by Jim Blackler
    I'm trying to get a BroadcastReceiver invoked when the screen is turned on. In my AndroidManifest.xml I have specified:

      <receiver android:name="IntentReceiver">
          <intent-filter>
              <action android:name="android.intent.action.SCREEN_ON"></action>
          </intent-filter>
      </receiver>

    However, it seems the receiver is never invoked (breakpoints don't fire, log statements are ignored). I've swapped out SCREEN_ON for BOOT_COMPLETED as a test, and that does get invoked. This is in a 1.6 (SDK level 4) project.

    A Google Code Search revealed this project; I downloaded it, synced it and converted it to work with the latest tools, but it too is not able to intercept that event: http://www.google.com/codesearch/p?hl=en#_8L9bayv7qE/trunk/phxandroid-intent-query/AndroidManifest.xml&q=android.intent.action.SCREEN_ON

    Is this perhaps no longer supported? Previously I have been able to intercept this event successfully with a call to Context.registerReceiver(), like so:

      registerReceiver(new BroadcastReceiver() {
          @Override
          public void onReceive(Context context, Intent intent) {
              // ...
          }
      }, new IntentFilter(Intent.ACTION_SCREEN_ON));

    However, this was performed by a long-living Service. Following sage advice from CommonsWare, I have elected to try to remove the long-living Service and use different techniques. But I still need to detect the screen off and on events.

  • Calling C# object method from IronPython

    - by Jason
    I'm trying to embed a scripting engine in my game. Since I'm writing it in C#, I figured IronPython would be a great fit, but the examples I've been able to find all focus on calling IronPython methods from C# instead of C# methods from IronPython scripts. To complicate things, I'm using Visual Studio 2010 RC1 on Windows 7 64-bit. IronRuby works like I expect it would, but I'm not very familiar with Ruby or Python syntax. What I'm doing:

      ScriptEngine engine = Python.CreateEngine();
      ScriptScope scope = engine.CreateScope();

      // Test class with a method that prints to the screen.
      scope.SetVariable("test", this);

      ScriptSource source = engine.CreateScriptSourceFromString("test.SayHello()", Microsoft.Scripting.SourceCodeKind.Statements);
      source.Execute(scope);

    This generates an error: "'TestClass' object has no attribute 'SayHello'". This exact setup works fine with IronRuby, though, using "self.test.SayHello()". I am wary of relying on IronRuby, because it doesn't appear as mature as IronPython. If it's close enough, I might go with that though. Any ideas? I know this has to be something simple.

  • Unable to load UIView with initWithNibName in Apple SDK 3.1.3

    - by James Foster
    I am trying to load my UIViewController and corresponding UIView programmatically in the AppDelegate class. I have the following in the applicationDidFinishLaunching method of the AppDelegate class:

      - (void)applicationDidFinishLaunching:(UIApplication *)application {
          NSLog(@"--- AppDelegate applicationDidFinishLaunching Start");
          // Override point for customization after application launch
          //MainController *controller = [[MainController alloc] initWithNibName:@"MainView" bundle:nil];
          MainController2 *controller = [[MainController2 alloc] initWithNibName:@"MainView2" bundle:nil];
          if (controller.view == nil) {
              NSLog(@"--- controller view is nil!!!!!!");
          }
          [window addSubview:controller.view];
          [window makeKeyAndVisible];
          NSLog(@"--- AppDelegate applicationDidFinishLaunching End");
      }

    Basically, the view in the view controller doesn't load, and when the application launches, it just shows the blank window. What is funny is that it worked before and then just stopped working. I am wondering if this is a bug in iPhone SDK 3.1.3? This is a really annoying issue; I was quite a ways along in a new project when I started having this problem and had to start over with a blank project and copy over all of my resources, when it started happening again... I have uninstalled iPhone OS 3.1.3 and reinstalled, and the problem persists... I also created a second UIViewController class and corresponding nib which DOES LOAD just fine... I am not sure why one works and the other doesn't.

    You can download a sample project which demonstrates this issue at the following link: http://www.mediafire.com/?nmhnmhbeyki To switch back and forth between the working/nonworking UIViewController and UIView, simply comment/uncomment the initWithNibName lines in the AppDelegate and the corresponding #import "MainController.h" statements in the appdelegate.h file... Any ideas? The sample project I have linked to isolates the problem in as few files/lines of code as possible... I appreciate any help you might be able to provide. Thanks, James

  • Telerik RADGrid - linq and updating

    - by Dave
    Hi. I'm using Telerik's RADGrid, basing my code on their example at http://demos.telerik.com/aspnet-ajax/grid/examples/dataediting/programaticlinqupdates/defaultcs.aspx

    Problem: I can insert and delete, but updating doesn't work. No error is trapped; the data just doesn't change. From the code below it looks like the Telerik grid is doing some kung-fu behind the scenes to wire things up. I can't see the db receiving any update statements.

    Question: anything obvious I'm missing?

      protected void RadGrid1_UpdateCommand(object source, GridCommandEventArgs e)
      {
          var editableItem = ((GridEditableItem) e.Item);
          var raceId = (Guid) editableItem.GetDataKeyValue("RaceID");

          // retrieve entity from the Db
          var race = DbContext.races.Where(n => n.raceid == raceId).FirstOrDefault();
          if (race != null)
          {
              // update entity's state
              editableItem.UpdateValues(race);
              try
              {
                  // submit changes to Db
                  DbContext.SubmitChanges();
              }
              catch (Exception f)
              {
                  ShowErrorMessage(f);
              }
          }
      }

    Think I may have to go back to their example.. get their db.. and attack from that point of view. Cheers!

  • SSRS 2008 and SSAS 2008 transport error

    - by dan english
    I am testing an upgrade to SSAS 2008 and verifying that existing reports work properly. I am able to get some SSRS reports that use SSAS as a datasource to run without any issues. They are simple and only have a single dataset. The reports that I am unable to get to work correctly against SSAS 2008 have multiple datasets and a filter set up with a date range as a parameter. As soon as I set that filter up as a parameter and deploy them, the report returns a "The connection either timed out or was lost. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. An existing connection was forcibly closed by the remote host" message.

    The funny thing is that the report works fine when I run it locally in BIDS, and it works fine once deployed if I point it to a SSAS 2005 server. Once I point it to the SSAS 2008 server, it fails. I can get other reports to work fine, but not the ones with this type of a filter setup. I can see that the start and end date parameter MDX statements get run in the trace, but that is it. After those run, we receive the transport connection message. Another funny thing is that in the production environment the reports are working fine, but that has SSRS 2005 and SSAS 2008.

    Does this make sense? What could be causing this? I have tried setting the single transaction level on the datasource too, but that does not seem to make a difference.

  • Read multiple tables from dataset in Powershell

    - by Lucas
    I am using a function that collects data from a SQL server:

      function Invoke-SQLCommand {
          param(
              [string] $dataSource = "myserver",
              [string] $dbName = "mydatabase",
              [string] $sqlCommand = $(throw "Please specify a query.")
          )
          $SqlConnection = New-Object System.Data.SqlClient.SqlConnection
          $SqlConnection.ConnectionString = "Server=$dataSource;Database=$dbName;Integrated Security=True"
          $SqlCmd = New-Object System.Data.SqlClient.SqlCommand
          $SqlCmd.CommandText = $sqlCommand
          $SqlCmd.Connection = $SqlConnection
          $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
          $SqlAdapter.SelectCommand = $SqlCmd
          $DataSet = New-Object System.Data.DataSet
          $SqlAdapter.Fill($DataSet)
          $SqlConnection.Close()
          $DataSet.Tables[0]
      }

    It works great but returns only one table. I am passing several SELECT statements, so the dataset contains multiple tables. I replaced $DataSet.Tables[0] with

      for ($i = 0; $i -lt $DataSet.Tables.Count; $i++) {
          $DataSet.Tables[$i]
      }

    but the console only shows the content of the first table and blank lines for each record of what should be the second table. The only way to see the result is to change the code to

      $DataSet.Tables[$i] | Out-String

    but I do not want strings; I want to have table objects to work with. When I assign what is returned by Invoke-SQLCommand to a variable, I can see that I have an array of DataRow objects, but only from the first table. What happened to the second table? Any help would be greatly appreciated. Thanks

  • Error Handling in T-SQL Scalar Function

    - by hydroparadise
    OK... this question could easily take multiple paths, so I will hit the more specific path first. While working with SQL Server 2005, I'm trying to create a scalar function that acts as a 'TryCast' from varchar to int. Where I encounter a problem is when I add a TRY block in the function:

      CREATE FUNCTION u_TryCastInt
      (
          @Value as VARCHAR(MAX)
      )
      RETURNS Int
      AS
      BEGIN
          DECLARE @Output AS Int
          BEGIN TRY
              SET @Output = CONVERT(Int, @Value)
          END TRY
          BEGIN CATCH
              SET @Output = 0
          END CATCH
          RETURN @Output
      END

    It turns out there's all sorts of things wrong with this statement, including "Invalid use of side-effecting or time-dependent operator in 'BEGIN TRY' within a function" and "Invalid use of side-effecting or time-dependent operator in 'END TRY' within a function". I can't seem to find any examples of using TRY statements within a scalar function, which got me thinking: is error handling in a function even possible?

    The goal here is to make a robust version of the Convert or Cast functions to allow a SELECT statement to carry through despite conversion errors. For example, take the following:

      CREATE TABLE tblTest ( f1 VARCHAR(50) )
      GO
      INSERT INTO tblTest(f1) VALUES('1')
      INSERT INTO tblTest(f1) VALUES('2')
      INSERT INTO tblTest(f1) VALUES('3')
      INSERT INTO tblTest(f1) VALUES('f')
      INSERT INTO tblTest(f1) VALUES('5')
      INSERT INTO tblTest(f1) VALUES('1.1')
      SELECT CONVERT(int,f1) AS f1_num FROM tblTest
      DROP TABLE tblTest

    It never reaches the point of dropping the table, because the execution gets hung on trying to convert 'f' to an integer. I want to be able to do something like this:

      SELECT u_TryCastInt(f1) AS f1_num FROM tblTest

      f1_num
      __________
      1
      2
      3
      0
      5
      0

    Any thoughts on this? Is there anything that exists that handles this? Also, I would like to expand the conversation to support SQL Server 2000, since TRY blocks are not an option in that scenario. Thanks in advance.
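
    For the SQL Server 2000 part of the question, one common fallback (my own hedged sketch, not from the original post) is to validate the string before converting instead of catching a failed conversion. The version below only accepts optionally signed whole numbers, so 'f' and '1.1' both fall back to 0; the name u_TryCastInt2000 is made up for illustration:

      CREATE FUNCTION u_TryCastInt2000 (@Value VARCHAR(8000))
      RETURNS Int
      AS
      BEGIN
          -- Trim, strip an optional leading sign, then require only digits.
          DECLARE @v VARCHAR(8000)
          SET @v = LTRIM(RTRIM(@Value))
          IF LEFT(@v, 1) IN ('+', '-') SET @v = SUBSTRING(@v, 2, 8000)
          -- Cap the length at 9 digits so the CONVERT itself can never overflow an int.
          IF LEN(@v) > 0 AND LEN(@v) <= 9 AND @v NOT LIKE '%[^0-9]%'
              RETURN CONVERT(Int, LTRIM(RTRIM(@Value)))
          RETURN 0
      END

    Called as SELECT dbo.u_TryCastInt2000(f1) FROM tblTest, it should produce the same result column the TRY/CATCH version aims for, at the cost of rejecting a few convertible edge cases (e.g. very large values and decimal strings).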

  • Replacing instructions in a method's MethodBody

    - by Alix
    Hi, (First of all, this is a very lengthy post, but don't worry: I've already implemented all of it, I'm just asking your opinion.) I'm having trouble implementing the following; I'd appreciate some help:

      1. I get a Type as a parameter.
      2. I define a subclass using reflection. Notice that I don't intend to modify the original type, but to create a new one.
      3. I create a property per field of the original class, like so:

           public class OriginalClass
           {
               private int x;
           }

           public class Subclass : OriginalClass
           {
               private int x;
               public int X
               {
                   get { return x; }
                   set { x = value; }
               }
           }

      4. For every method of the superclass, I create an analogous method in the subclass. The method's body must be the same, except that I replace the instructions ldfld x with callvirt this.get_X; that is, instead of reading from the field directly, I call the get accessor.

    I'm having trouble with step 4. I know you're not supposed to manipulate code like this, but I really need to. Here's what I've tried:

    Attempt #1: Use Mono.Cecil. This would allow me to parse the body of the method into human-readable Instructions and easily replace instructions. However, the original type isn't in a .dll file, so I can't find a way to load it with Mono.Cecil. Writing the type to a .dll, then loading it, modifying it, writing the new type to disk (which I think is the way you create a type with Mono.Cecil), and then loading that seems like a huge overhead.

    Attempt #2: Use Mono.Reflection. This would also allow me to parse the body into Instructions, but then I have no support for replacing instructions. I've implemented a very ugly and inefficient solution using Mono.Reflection, but it doesn't yet support methods that contain try-catch statements (although I guess I can implement this), and I'm concerned that there may be other scenarios in which it won't work, since I'm using the ILGenerator in a somewhat unusual way. Also, it's very ugly ;). Here's what I've done:

      private void TransformMethod(MethodInfo methodInfo)
      {
          // Create a method with the same signature.
          ParameterInfo[] paramList = methodInfo.GetParameters();
          Type[] args = new Type[paramList.Length];
          for (int i = 0; i < args.Length; i++)
          {
              args[i] = paramList[i].ParameterType;
          }
          MethodBuilder methodBuilder = typeBuilder.DefineMethod(
              methodInfo.Name,
              methodInfo.Attributes,
              methodInfo.ReturnType,
              args);
          ILGenerator ilGen = methodBuilder.GetILGenerator();

          // Declare the same local variables as in the original method.
          IList<LocalVariableInfo> locals = methodInfo.GetMethodBody().LocalVariables;
          foreach (LocalVariableInfo local in locals)
          {
              ilGen.DeclareLocal(local.LocalType);
          }

          // Get readable instructions.
          IList<Instruction> instructions = methodInfo.GetInstructions();

          // I first need to define labels for every instruction in case I
          // later find a jump to that instruction. Once the instruction has
          // been emitted I cannot label it, so I'll need to do it in advance.
          // Since I'm doing a first pass on the method's body anyway, I could
          // instead just create labels where they are truly needed, but for
          // now I'm using this quick fix.
          Dictionary<int, Label> labels = new Dictionary<int, Label>();
          foreach (Instruction instr in instructions)
          {
              labels[instr.Offset] = ilGen.DefineLabel();
          }

          foreach (Instruction instr in instructions)
          {
              // Mark this instruction with a label, in case there's a branch
              // instruction that jumps here.
              ilGen.MarkLabel(labels[instr.Offset]);

              // If this is the instruction that I want to replace (ldfld x)...
              if (instr.OpCode == OpCodes.Ldfld)
              {
                  // ...get the get accessor for the accessed field (get_X())
                  // (I have the accessors in a dictionary; this isn't relevant),
                  MethodInfo safeReadAccessor = dataMembersSafeAccessors[((FieldInfo) instr.Operand).Name][0];
                  // ...and instead of emitting the original instruction (ldfld x),
                  // emit a call to the get accessor.
                  ilGen.Emit(OpCodes.Callvirt, safeReadAccessor);
              }
              // Else (it's any other instruction), re-emit the instruction, unaltered.
              else
              {
                  Reemit(instr, ilGen, labels);
              }
          }
      }

    And here comes the horrible, horrible Reemit method:

      private void Reemit(Instruction instr, ILGenerator ilGen, Dictionary<int, Label> labels)
      {
          // If the instruction doesn't have an operand, emit the opcode and return.
          if (instr.Operand == null)
          {
              ilGen.Emit(instr.OpCode);
              return;
          }

          // Else (it has an operand)...
          // If it's a branch instruction, retrieve the corresponding label (to
          // which we want to jump), emit the instruction and return.
          if (instr.OpCode.FlowControl == FlowControl.Branch)
          {
              ilGen.Emit(instr.OpCode, labels[Int32.Parse(instr.Operand.ToString())]);
              return;
          }

          // Otherwise, simply emit the instruction. I need to use the right
          // Emit call, so I need to cast the operand to its type.
          Type operandType = instr.Operand.GetType();
          if (typeof(byte).IsAssignableFrom(operandType))
              ilGen.Emit(instr.OpCode, (byte) instr.Operand);
          else if (typeof(double).IsAssignableFrom(operandType))
              ilGen.Emit(instr.OpCode, (double) instr.Operand);
          else if (typeof(float).IsAssignableFrom(operandType))
              ilGen.Emit(instr.OpCode, (float) instr.Operand);
          else if (typeof(int).IsAssignableFrom(operandType))
              ilGen.Emit(instr.OpCode, (int) instr.Operand);
          ... // you get the idea. This is a pretty long method, all like this.
      }

    Branch instructions are a special case because instr.Operand is SByte, but Emit expects an operand of type Label. Hence the need for the labels dictionary. As you can see, this is pretty horrible. What's more, it doesn't work in all cases, for instance with methods that contain try-catch statements, since I haven't emitted them using the BeginExceptionBlock, BeginCatchBlock, etc. methods of ILGenerator. This is getting complicated. I guess I can do it: MethodBody has a list of ExceptionHandlingClause that should contain the necessary information to do this. But I don't like this solution anyway, so I'll save it as a last-resort solution.

    Attempt #3: Go bare-back and just copy the byte array returned by MethodBody.GetILAsByteArray(), since I only want to replace a single instruction with another single instruction of the same size that produces the exact same result: it loads the same type of object on the stack, etc. So there won't be any label shifting, and everything should work exactly the same. I've done this, replacing specific bytes of the array and then calling MethodBuilder.CreateMethodBody(byte[], int), but I still get the same error with exceptions, and I still need to declare the local variables or I'll get an error... even when I simply copy the method's body and don't change anything. So this is more efficient, but I still have to take care of the exceptions, etc. Sigh.

    Here's the implementation of attempt #3, in case anyone is interested:

      private void TransformMethod(MethodInfo methodInfo, Dictionary<string, MethodInfo[]> dataMembersSafeAccessors, ModuleBuilder moduleBuilder)
      {
          ParameterInfo[] paramList = methodInfo.GetParameters();
          Type[] args = new Type[paramList.Length];
          for (int i = 0; i < args.Length; i++)
          {
              args[i] = paramList[i].ParameterType;
          }
          MethodBuilder methodBuilder = typeBuilder.DefineMethod(
              methodInfo.Name,
              methodInfo.Attributes,
              methodInfo.ReturnType,
              args);
          ILGenerator ilGen = methodBuilder.GetILGenerator();

          IList<LocalVariableInfo> locals = methodInfo.GetMethodBody().LocalVariables;
          foreach (LocalVariableInfo local in locals)
          {
              ilGen.DeclareLocal(local.LocalType);
          }

          byte[] rawInstructions = methodInfo.GetMethodBody().GetILAsByteArray();
          IList<Instruction> instructions = methodInfo.GetInstructions();

          int k = 0;
          foreach (Instruction instr in instructions)
          {
              if (instr.OpCode == OpCodes.Ldfld)
              {
                  MethodInfo safeReadAccessor = dataMembersSafeAccessors[((FieldInfo) instr.Operand).Name][0];

                  // Copy the opcode: Callvirt.
                  byte[] bytes = toByteArray(OpCodes.Callvirt.Value);
                  for (int m = 0; m < OpCodes.Callvirt.Size; m++)
                  {
                      rawInstructions[k++] = bytes[bytes.Length - 1 - m];
                  }
                  // Copy the operand: the accessor's metadata token.
                  bytes = toByteArray(moduleBuilder.GetMethodToken(safeReadAccessor).Token);
                  for (int m = instr.Size - OpCodes.Ldfld.Size - 1; m >= 0; m--)
                  {
                      rawInstructions[k++] = bytes[m];
                  }
              }
              else
              {
                  // Skip this instruction (do not replace it).
                  k += instr.Size;
              }
          }
          methodBuilder.CreateMethodBody(rawInstructions, rawInstructions.Length);
      }

      private static byte[] toByteArray(int intValue)
      {
          byte[] intBytes = BitConverter.GetBytes(intValue);
          if (BitConverter.IsLittleEndian)
              Array.Reverse(intBytes);
          return intBytes;
      }

      private static byte[] toByteArray(short shortValue)
      {
          byte[] intBytes = BitConverter.GetBytes(shortValue);
          if (BitConverter.IsLittleEndian)
              Array.Reverse(intBytes);
          return intBytes;
      }

    (I know it isn't pretty. Sorry. I put it together quickly to see if it would work.) I don't have much hope, but can anyone suggest anything better than this? Sorry about the extremely lengthy post, and thanks.

    Read the article

  • Faster way to transfer table data from linked server

    - by spender
    After much fiddling, I've managed to install the right ODBC driver and have successfully created a linked server on SQL Server 2008, by which I can access my PostgreSQL db from SQL Server. I'm copying all of the data from some of the tables in the PgSQL DB into SQL Server using merge statements that take the following form:

    with mbRemote as (
        select *
        from openquery(someLinkedDb, 'select * from someTable')
    )
    merge into someTable mbLocal
    using mbRemote
    on mbLocal.id = mbRemote.id
    when matched
        /* edit */
        /* clause below really speeds things up when many rows are unchanged */
        /* can you think of anything else? */
        and not (mbLocal.field1 = mbRemote.field1
            and mbLocal.field2 = mbRemote.field2
            and mbLocal.field3 = mbRemote.field3
            and mbLocal.field4 = mbRemote.field4)
        /* end edit */
    then update set
        mbLocal.field1 = mbRemote.field1,
        mbLocal.field2 = mbRemote.field2,
        mbLocal.field3 = mbRemote.field3,
        mbLocal.field4 = mbRemote.field4
    when not matched then insert
    (
        id, field1, field2, field3, field4
    )
    values
    (
        mbRemote.id, mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4
    )
    when not matched by source then delete;

    After this statement completes, the local (SQL Server) copy is fully in sync with the remote (PgSQL server). A few questions about this approach: Is it sane? It strikes me that an update will be run over all fields in local rows that haven't necessarily changed; the only prerequisite is that the local and remote id fields match. Is there a more fine-grained approach, or a way of constraining the merge statement so that it only updates rows that have actually changed?
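    On the fine-grained update question: the "and not (...)" clause above compares fields with =, so a row whose value changed to or from NULL never tests as changed and is silently skipped. A NULL-safe alternative is the EXISTS/EXCEPT idiom, because EXCEPT treats two NULLs as equal when comparing rows, so the update fires only when at least one field genuinely differs. Below is a minimal sketch of the reworked MERGE, wrapped in ADO.NET for illustration; the connection string and timeout are placeholder assumptions, and the table and column names follow the question.

    using System;
    using System.Data.SqlClient;

    class SyncFromLinkedServer
    {
        // Placeholder connection string (assumption).
        const string ConnStr = "Server=.;Database=LocalCopy;Integrated Security=true";

        // Same MERGE as above, but the change test uses EXCEPT: identical rows
        // (NULLs included) are skipped, and any real difference triggers the update.
        const string MergeSql = @"
    with mbRemote as (
        select * from openquery(someLinkedDb, 'select * from someTable')
    )
    merge into someTable mbLocal
    using mbRemote
    on mbLocal.id = mbRemote.id
    when matched and exists (
        select mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4
        except
        select mbLocal.field1, mbLocal.field2, mbLocal.field3, mbLocal.field4
    ) then update set
        mbLocal.field1 = mbRemote.field1,
        mbLocal.field2 = mbRemote.field2,
        mbLocal.field3 = mbRemote.field3,
        mbLocal.field4 = mbRemote.field4
    when not matched then insert (id, field1, field2, field3, field4)
        values (mbRemote.id, mbRemote.field1, mbRemote.field2,
                mbRemote.field3, mbRemote.field4)
    when not matched by source then delete;";

        static void Main()
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(MergeSql, conn))
            {
                cmd.CommandTimeout = 600; // remote full scans can be slow (assumption)
                conn.Open();
                Console.WriteLine(cmd.ExecuteNonQuery() + " row(s) affected");
            }
        }
    }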

    Read the article

  • What do these MS DTC Exceptions mean?

    - by David B
    I wrote a program to demonstrate the behavior of DTC timeouts with multiple threads. I'm getting several exceptions, seemingly at random. Are all of these simple timeouts, or are some of them indicative of deeper problems (connection pool interactions, etc.)?

    * The Microsoft Distributed Transaction Coordinator (MS DTC) has cancelled the distributed transaction.
    * Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
    * The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
    * The operation is not valid for the state of the transaction.
    * ExecuteReader requires an open and available Connection. The connection's current state is closed.

    Here's the data part of the code:

    using (DemoDataDataContext dc1 = new DemoDataDataContext(Conn1))
    using (DemoDataDataContext dc2 = new DemoDataDataContext(Conn2))
    {
        WriteMany(dc1, 100);    // generate 100 records for insert
        WriteMany(dc2, 100000); // generate 100,000 records for insert
        Console.WriteLine("{0} : {1}", Name, " generated records for insert.");

        using (TransactionScope ts = new TransactionScope())
        {
            dc1.SubmitChanges();
            dc2.SubmitChanges();
            ts.Complete();
        }
    }
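    Most of those messages look like downstream symptoms of a single event: the scope timing out (or MS DTC aborting the transaction), after which every later command runs on a connection whose transaction has already ended, which in turn produces the "completed but has not been disposed", "not valid for the state of the transaction" and closed-connection errors. A minimal sketch that makes the scope timeout explicit instead of relying on the machine default; the connection strings, table name and timeout value are placeholder assumptions:

    using System;
    using System.Data.SqlClient;
    using System.Transactions;

    class DtcTimeoutSketch
    {
        // Placeholder connection strings (assumption): two different servers,
        // so the transaction escalates from a local one to MS DTC.
        const string Conn1 = "Server=server1;Database=DemoData;Integrated Security=true";
        const string Conn2 = "Server=server2;Database=DemoData;Integrated Security=true";

        static void Main()
        {
            var options = new TransactionOptions
            {
                IsolationLevel = IsolationLevel.ReadCommitted,
                // Explicit timeout sized for the slow 100,000-row batch
                // (assumption); when it expires, the transaction is aborted
                // and subsequent commands raise the errors quoted above.
                Timeout = TimeSpan.FromMinutes(5)
            };

            using (var ts = new TransactionScope(TransactionScopeOption.Required, options))
            using (var c1 = new SqlConnection(Conn1))
            using (var c2 = new SqlConnection(Conn2))
            {
                c1.Open(); // enlists in a lightweight local transaction
                c2.Open(); // second connection forces escalation to MS DTC

                new SqlCommand("insert into SomeTable (x) values (1)", c1).ExecuteNonQuery();
                new SqlCommand("insert into SomeTable (x) values (2)", c2).ExecuteNonQuery();

                ts.Complete(); // only reached if nothing timed out or threw
            } // disposing without Complete() rolls the distributed transaction back
        }
    }

    If lengthening the timeout makes the first message disappear and the rest vanish with it, they were all the same abort surfacing at different points in the code.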

    Read the article

  • Coding a parser for a domain specific language in Java

    - by Bruno Rothgiesser
    We want to design a simple domain-specific language for writing test scripts to automatically test an XML-based interface of one of our applications. A sample test would be:

    1. Get an input XML file from a network shared folder or Subversion repository.
    2. Import the XML file using the interface.
    3. Check that the import result message was successful.
    4. Export the XML corresponding to the object that was just imported, using the interface, and check that it is correct.

    If the domain-specific language can be declarative and its statements can look as close to the sentences in the sample above as possible, it will be awesome, because people won't necessarily have to be programmers to understand/write/maintain the tests. Something like:

    newObject = GET FILE "http://svn/repos/template1.xml"
    responseMessage = IMPORT newObject
    newObjectID = GET PROPERTY '/object/id/' FROM responseMessage
    (..)

    But then I'm not sure how to implement a simple parser for that language in Java. Back in school, 10 years ago, I coded a language parser using Lex and Yacc for the C language. Maybe an approach would be to use some equivalent for Java? Or I could give up the idea of having a declarative language and choose an XML-based language instead, which would possibly be easier to create a parser for. What approach would you recommend?
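    For a line-oriented language this small, a hand-rolled tokenizer plus a dispatch on the verb may be all the parser you need; ANTLR and JavaCC are the usual Java descendants of Lex/Yacc if the grammar later outgrows that. Below is a rough sketch of the hand-rolled route (shown in C#, but the structure ports to Java almost line-for-line); the verbs are read off the sample script, and the actions are stubbed assumptions rather than the real interface calls.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text.RegularExpressions;

    class TinyDslInterpreter
    {
        // A token is a quoted string ('...' or "...") or a bare word.
        static readonly Regex Tokenizer = new Regex(@"'[^']*'|""[^""]*""|\S+");

        static readonly Dictionary<string, string> Env = new Dictionary<string, string>();

        static void Main()
        {
            Execute("newObject = GET FILE \"http://svn/repos/template1.xml\"");
            Execute("responseMessage = IMPORT newObject");
            Console.WriteLine(Env["responseMessage"]);
        }

        // statement := IDENT '=' VERB args...
        static void Execute(string line)
        {
            List<string> t = Tokenizer.Matches(line).Cast<Match>()
                                      .Select(m => m.Value).ToList();
            if (t.Count == 0) return; // blank line

            if (t.Count < 3 || t[1] != "=")
                throw new FormatException("expected '<name> = <VERB> ...': " + line);

            string target = t[0];
            if (t[2] == "GET" && t.Count >= 5 && t[3] == "FILE")
                Env[target] = "<contents of " + Unquote(t[4]) + ">";  // stub: fetch the file
            else if (t[2] == "IMPORT" && t.Count >= 4)
                Env[target] = "<response to importing " + t[3] + ">"; // stub: call the interface
            else
                throw new FormatException("unknown verb: " + line);
        }

        static string Unquote(string s) =>
            s.Length >= 2 && (s[0] == '\'' || s[0] == '"') ? s.Substring(1, s.Length - 2) : s;
    }

    Because every statement is a single line, error reporting is simply the offending line, which matters a lot for the non-programmers the language is aimed at.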

    Read the article

  • cellForRowAtIndexPath not being called on tableView reloadData

    - by BotskoNet
    I have a UITableView on one view that loads in data at the start of the application; later, when a user enters text into a box and hits a button, I re-query the database and re-populate the original NSMutableArray that stores the data for the table. All of that is working perfectly. From some logging statements I can tell that the array has the correct information, and that the numberOfRowsInSection method returns the proper count and is called after the reload. However, cellForRowAtIndexPath is never called the second time (after the reload) and the table data is never updated. I've spent hours searching the net and found nothing that helps. Can anyone help? All code is at: http://github.com/botskonet/inmyspot The specific reload is being called at: http://github.com/botskonet/inmyspot/blob/master/Classes/InMySpotViewController.m (roughly line 94), from: http://github.com/botskonet/inmyspot/blob/master/Classes/PlateFormViewController.m (roughly line 101). A bit more info: once the new data has been added to the mutable array, if I try to start scrolling the table, it eventually dies with: "Terminating app due to uncaught exception 'NSRangeException', reason: '* -[NSCFArray objectAtIndex:]: index (29) beyond bounds (29)'" I assume this means the table cells can't find any data in the array to match the scroll position, which seems to be because the array has the new data but the table doesn't.

    Read the article
