Search Results

Search found 11862 results on 475 pages for 'gman save the unicorns'.


  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I’d share it to spare others who might encounter it the same annoyance I had. I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project), and whenever it hit the code to save it, it would give me: "The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications." This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this:
    [DataContract]
    public class MyAwesomeClass
    {
        [DataMember]
        public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }

        [DataMember]
        public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }

        public MyAwesomeClass()
        {
            GreatItems = new ObservableCollection<MyMagicItemsClass>();
            SuperbItems = new ObservableCollection<MyMagicItemsClass>();
        }
    }
    That’s all well and fine. And MyMagicItemsClass was also public with a parameterless public constructor. It too had DataContractAttribute applied to it, and it had DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should have been cool, but it wasn’t, because I kept getting that “not public” exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn’t ObservableCollection<T>, trying to serialize the collections directly, moving it all to a separate library project, etc.), but I want to keep this short. In the end, I remembered the “Debug->Exceptions…” VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the “Thrown” checkbox for “Common Language Runtime Exceptions”, started the project under the debugger, and voilà: the true problem revealed itself. Some of my properties had fairly elaborate setters whose logic I wanted to ignore, so for some of them I applied an IgnoreDataMember attribute and applied the DataMember attribute to the underlying fields instead. All of which, in line with good programming practices, were private. Well, it just so happens that WP7 apps run in a “partial trust” environment, and outside of “full trust”-land, DataContractSerializer refuses to serialize or deserialize non-public members. Of course that exception was swallowed up internally by .NET, so all I ever saw was that bizarre message about things that I knew for certain were public being “not public”. I changed all the private fields I was serializing to public and everything worked just fine.
    In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don’t want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting on and calling the appropriate methods, and cause some havoc by reflecting on and setting the appropriate fields in certain circumstances). The fact that you cannot reflect your own assembly seems a bit heavy-handed, but then again I’m not a compiler writer or a framework designer, and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems it could present (if any). So, lesson learned. If you get an incomprehensible exception message, turn on break-on-thrown for all exceptions and try running it again (it might take a couple of tries, depending) and see what pops out. Chances are you’ll find the buried exception that actually explains what was going on. And if you’re getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you’re trying to serialize a private or protected field/property.
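    A minimal sketch of the pattern described above (member names are illustrative, not from the original project): the elaborate setter is hidden from the serializer with IgnoreDataMember, and the backing field carries DataMember, which under WP7's partial trust means it has to be public.
    [DataContract]
    public class MyMagicItemsClass
    {
        // Serialized instead of the property; public because partial trust
        // forbids DataContractSerializer from reflecting non-public members.
        [DataMember]
        public string _name;

        // Elaborate setter logic we don't want to run during deserialization.
        [IgnoreDataMember]
        public string Name
        {
            get { return _name; }
            set { _name = (value ?? string.Empty).Trim(); }
        }
    }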

    Read the article

  • Additional new material WebLogic Community

    - by JuergenKress
    Virtual Developer Conference On Demand - Register
    Updated Book: WebLogic 12c: Distinctive Recipes - Architecture, Development, Administration by Oracle ACE Director Frank Munz - Blog | YouTube
    Webcast: Migrating from GlassFish to WebLogic - Replay
    Reliance Commercial Finance Accelerates Time-to-Market, Improves IT Staff Productivity by 70% - Blog | Oracle Magazine
    Retrieving WebLogic Server Name and Port in ADF Application by Andrejus Baranovskis, Oracle ACE Director - Blog
    Using Oracle WebLogic 12c with NetBeans IDE - Oracle ACE Director Markus Eisele walks you through installing and configuring all the necessary components, and helps you get started with a simple Hello World project. Read the article.
    Video: Oracle A-Team ADF Mobile Persistence Sample - This video by Oracle Fusion Middleware A-Team architect Steven Davelaar demonstrates how to use the ADF Mobile Persistence Sample JDeveloper extension to generate a fully functional ADF Mobile application that reads and writes data using an ADF BC SOAP web service. Watch the video.
    Java ME 8 Release - Download Java ME today! This release is an implementation of the Java ME 8 standards JSR 360 (CLDC 8) and JSR 361 (MEEP 8), and includes support of alignment with Java SE 8 language features and APIs, an enhanced services-enabled application platform, the ability to "right-size" the platform to address a wide range of target devices, and more. Learn more.
    Download Java ME SDK 8 - It includes application development support for Oracle Java ME Embedded 8 platforms and includes plugins for NetBeans 8. See the Java ME 8 Developer Tools Documentation to learn more.
    JavaOne 2014 Early Bird Rate - Register early to save $400 off the onsite price. With the release of Java 8 this year, we have exciting new sessions and an interactive demo space!
    NetBeans IDE 8.0 Patch Update - The NetBeans Team has released a patch for NetBeans IDE 8.0. Download it today to get fixes that enhance stability and performance.
    Java 8 Questions Forum - For any questions about this new release, please join the conversation on the Java 8 Questions Forum.
    Java ME 8: Getting Started with Samples and Demo Code - Learn in a few steps how to get started with Java ME 8!
    The New Java SE 8 Features - Java SE 8 introduces enhancements such as lambda expressions that enable you to write more concise yet readable code, better utilize multicore systems, and detect more errors at compile time. See What's New in JDK 8 and the new Java SE 8 documentation portal.
    Pay Less for Java-Related Books! - Save 20% on all new Oracle Press books related to Java. Download the free preview sampler for the Java 8 book written by Herbert Schildt, Maurice Naftalin, Hendrik Ebbers and J.F. DiMarzio.
    New book: EJB 3 in Action, Second Edition
    WebLogic 12c Does WebSockets - Getting Started by C2B2
    Video: Building Robots with Java Embedded
    Video: Nighthacking TV - Watch presentations by Stephen Chin and community members about Java SE, Java Embedded, Java EE, Hadoop, Robots and more.
    Migrating the Spring Pet Clinic to Java EE 7
    Trip report: Jozi JUG Java Day in Johannesburg
    How to Build GlassFish 4 from Source
    4,000 posts later: The Aquarium
    WebLogic Partner Community - For regular information become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Accounts in Work Items after migration to TFS 2010 and to new domain

    - by Clara Oscura
    Lately I’ve been doing some tests on migrating our TFS 2008 installation to TFS 2010, coupled with a machine and domain change. One particularly tricky topic was user accounts. We first installed a new machine with TFS 2010 and then migrated the projects from the old server. The work items were migrated with the projects. Great, but if I try to edit one of the old work items I cannot save it anymore, because some fields contain old user names (e.g. OLDDOMAIN\user) which are not known in the new domain (it should be NEWDOMAIN\user). The errors look like this: When I correct the ‘Assigned To’ field value, I get another error regarding another field. Before TFS 2010, we had the TFSUsers power tool. It allowed you to map an old user name to a new user name. This is not available anymore, because WI fields with user accounts are now synchronized with AD display name changes (explained here). The correct way to go about this in TFS 2010 is to use TFSConfig Identities before adding the new domain accounts into the TFS groups (documented here). So, too late for us. I’ve found a (tedious) workaround to change those old accounts in work items, in order to allow people to keep working with them.
    1. Install TFS 2010 power tools.
    2. Export the WIT from your project (VS | Tools | Process Editor | Work Item Types). Save the definition, for example: Original_MyProject_Task.xml
    3. Copy the xml (NoReadOnly_MyProject_Task.xml) and edit it. From the field definitions of ‘Activated By’, ‘Closed By’ and ‘Resolved By’, remove the following:
       <WHENNOTCHANGED field="System.State">
         <READONLY />
       </WHENNOTCHANGED>
    4. Import the WIT in VS. Choose the new file (NoReadOnly_MyProject_Task.xml) and import it into MyProject.
    5. Open all tasks in Excel (flat list). Display the following columns: Assigned To, Activated By, Closed By, Resolved By. Change the user accounts to the new ones (I usually sort each column alphabetically to make it easier).
    6. Publish. If you get a conflict on a field, tough luck. You will have to manually choose “Local version” for each work item. I told you it was a tedious process.
    7. Import the original WIT (Original_MyProject_Task.xml) into MyProject. We only changed the WI definition so that we could change some fields; the original definition should be put back.
    And what about these other fields? Created By, Authorized As. These fields are not editable by definition (VS | Tools | Process Editor | Work Item Fields Explorer), even if they are not marked as read-only in the WIT. You can leave the old values; it doesn’t seem to matter to TFS. The other four fields are editable by definition, so only the WIT read-only rule prevents us from changing them. Technorati Tags: TFS,Team Foundation Server 2010,Work Item,Domain change
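    The export and import in steps 2 and 7 can also be done from the command line with the witadmin tool that ships with TFS 2010 (a sketch; the collection URL is a placeholder for your own server):
    witadmin exportwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /n:Task /f:Original_MyProject_Task.xml
    witadmin importwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /f:NoReadOnly_MyProject_Task.xml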

    Read the article

  • Oracle Solaris Crash Analysis Tool 5.3 now available

    - by user12609056
    Oracle Solaris Crash Analysis Tool 5.3 - The Oracle Solaris Crash Analysis Tool Team is happy to announce the availability of release 5.3. This release addresses bugs discovered since the release of 5.2, plus enhancements to support Oracle Solaris 11 and updates to Oracle Solaris versions 7 through 10. The packages are available on My Oracle Support - simply search for Patch 13365310 to find the downloadable packages.
    Release Notes
    General: blast support - The blast GUI has been removed and is no longer supported.
    Oracle Solaris 2.6 Support - As of Oracle Solaris Crash Analysis Tool 5.3, support for Oracle Solaris 2.6 has been dropped. If you have systems running Solaris 2.6, you will need to use Oracle Solaris Crash Analysis Tool 5.2 or earlier to read their crash dumps.
    New Commands: Sanity Command - Though one can re-run the sanity checks that are run at tool start-up using the coreinfo command, many users were unaware that they were. Though these checks can still be run using that command, a new command, namely sanity, can now be used to re-run the checks at any time.
    Interface Changes: scat_explore -r and -t options - The -r option has been added to scat_explore so that a base directory can be specified, and the -t option was added to enable color tagging of the output. The scat_explore sub-command now accepts new options. Usage is:
    scat --scat_explore [-atv] [-r base_dir] [-d dest] [unix.N] [vmcore.N]
    Where:
    -v          Verbose Mode: The command will print messages highlighting what it's doing.
    -a          Auto Mode: The command does not prompt for input from the user as it runs.
    -d dest     Instructs scat_explore to save its output in the directory dest instead of the present working directory.
    -r base_dir Instructs scat_explore to save its output under the directory base_dir instead of the present working directory. If it is not specified using the -d option, scat_explore names its output file "scat_explore_system_name_hostid_lbolt_value_corefile_name".
    -t          Enable color tags. When enabled, scat_explore tags important text with colors that match the level of importance. These colors correspond to the colors normally printed when running Oracle Solaris Crash Analysis Tool in interactive mode.
       Tag Name  Definition
       FATAL     An extremely important message which should be investigated.
       WARNING   A warning that may or may not have anything to do with the crash.
       ERROR     An error, usually printed with a suggested command.
       ALERT     Used to indicate something the tool discovered.
       INFO      Purely informational message.
       INFO2     A follow-up to an INFO tagged message.
       REDZONE   Usually used when printing memory info showing something is in the kernel's REDZONE.
    N           The number of the crash dump. Specifying unix.N vmcore.N is optional and not required.
    Example:
    $ scat --scat_explore -a -v -r /tmp vmcore.0
    #Output directory: /tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
    #Tar filename: scat_explore_oomph_833a2959_0x28800_vmcore.0.tar
    #Extracting crash data...
    #Gathering standard crash data collections...
    #Panic string indicates a possible hang...
    #Gathering Hang Related data...
    #Creating tar file...
    #Compressing tar file...
    #Successful extraction
    SCAT_EXPLORE_DATA_DIR=/tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
    Sending scat_explore results - The .tar.gz file that results from a scat_explore run may be sent using Oracle Secure File Transfer. The Oracle Secure File Transfer User Guide describes how to use it to send a file. The send_scat_explore script now has a -t option for specifying a "to" address for sending the results. This option is mandatory.
    Known Issues - There are a couple of known issues that we are addressing in release 5.4, which you should expect to see soon: Display of timestamps in threads and clock information is incorrect in some cases. There are alignment issues with some of the tables produced by the tool.

    Read the article

  • Graphics trouble after resuming from hibernate or suspend

    - by Voyagerfan5761
    I have a Dell Inspiron 2650 (with NVidia graphics, using nouveau drivers) that I'm using to try out Ubuntu. It's all great, except that Hibernate and Suspend aren't usable. Yes, I know that questions about power-save issues are rampant in the Linux support universe, but it seems that every time I find a solution it's for a very specific hardware combination and doesn't apply to me. So anyway, here goes.
    When I resume from either power-saving mode, I'll get graphics problems anywhere in the range from a few scattered random-colored pixels that won't change, all the way to full-screen patterns that don't change as I move the mouse, hit keys on the keyboard, or even bring up the shutdown dialog using the power button. Those full-screen issues (which may involve stripes with random pixels, partial black screen, or both) always end in me forcing the machine to shut down by holding the power button. I haven't done much testing yet to determine what severity level is most commonly associated with each mode, but I do avoid using either power-save option because of these issues. I'll add info on my hardware as I can gather it (no home internet connection, and this laptop is tethered to my desk by a dead battery and casing degradation). Please feel free to request something specific in the question comments.
    Hardware Info: See this hardinfo report for my system's hardware configuration. (No, my username is not "myuser"; I sanitized hardinfo's output before publishing it.)
    Screenshots: These screenshots are from a relatively mild occurrence, which happened after the second hibernation I took that session. The first one worked great, though I used the wireless card and Firefox heavily between the two hibernation attempts. Take a look at what happened when I opened my home directory in Nautilus and scrolled it. See below for the situations I've tested so far. The real trouble comes when the machine resumes to an unusable state; in such cases I can't even unlock the screen or properly reboot, much less take a screenshot. I have a hunch that putting a CD in the drive will cause such major failures, and I will try that at some point; see related question.
    Situations Tested
    Maverick (10.10)
    Suspend:
    - Seems to suspend nicely with nothing running
    - Seems to suspend nicely with flash drive plugged in
    - On resume from suspend with no flash drive, Terminal and gedit running: funky graphics on top of log output, then blank screen with pixelated cursor; no response to power button (normally will shut down 60 seconds later)
    Hibernate:
    - Seems to hibernate nicely with nothing running
    - Seems to hibernate nicely with a few apps (Terminal, Mouse preferences) running
    - Seems to not hibernate when flash drive plugged in
    - Seems to not hibernate when System Monitor is running
    - Have encountered failed hibernation (after several hours and one successful hibernate/thaw cycle) with no external media connected and no programs running except normal background stuff
    Natty LiveCD (11.04_2010-12-22)
    When I tested it, Natty wouldn't stay logged in. It played part of the login sound and then [ OK ] appeared in the top right corner (white-on-black terminal text) for a few seconds. Then it kicked me back to the Unlock screen. It did that four times before I gave up and just tested suspend from the Unlock screen.
    Suspend:
    - Resumed to vertical gray and black lines 2px (?) wide, then shifted to vertical "jail bars" of black over a black screen with above-described random pixels and mouse pointer. No apparent response to input from mouse (clicking randomly). Keyboard and touchpad unrecognized.
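    (One way to gather more clues, assuming the pm-utils stack that Ubuntu used in this era: trigger the power-saving mode from a terminal and read the pm-utils log after resuming.)
    # both commands are part of pm-utils on Maverick/Natty
    sudo pm-suspend      # or: sudo pm-hibernate
    # after resume (or a forced reboot), see what the suspend hooks reported:
    less /var/log/pm-suspend.log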

    Read the article

  • Make your TSQL easier to read during a presentation

    - by Jonathan Allen
    SQL Server Management Studio 2012 has some neat settings that you can use to make your presentations at a SQL event better for the attendees, if you are willing to spend a few minutes making some settings changes. Historically, I have been reluctant to make changes to my SSMS settings, as it is such a tedious process and it’s not 100% clear that what you think you are changing is actually what gets changed. With SSMS 2012 this has become a lot easier and a lot less risky. In any session that involves TSQL there is a trade-off between the speaker having all the code on screen and the attendees being able to read any of what is on screen. You (the speaker) might be able to read this when you are working on the code, but plenty of your audience won’t be able to make head or tail of it. SSMS 2012 has a zoom facility that can help: but don’t go nuts … having the font too big means you will be scrolling a lot and the code will again be rendered unreadable. There is more, though, but you need to take a deep breath and open the Tools menu and delve into the SSMS options. In previous versions of SSMS this is a deep, dark and scary place where changing values can be obscure and sometimes catastrophic to the UI when you get back to the code editor. First things first: we set out as a good DBA would and save our current (and presumably acceptable) SSMS configuration. From the Import and Export Settings wizard you can set up a file to hold all of the settings that you currently have. The wizard will open and ask you to pick an option; this time around, choose to export settings. Hit Next and Next again, then name your settings profile in the final step of the wizard and click Finish. Once this is done you can change whatever you like and always get back to this configuration in a couple of clicks. So what can you change to make for a good experience? Well, there are plenty of things that can be altered, but don’t go too mad and change too many things without taking a look at the results. For every item on the list above you can change font, size, weight, colour, background colour, etc., but consider what you are trying to achieve and take it slowly. I have seen presenters with their settings set to have a yellow highlight and black font rather than the default pale blue background and slightly darker font; to achieve that, select Text Editor and then select “Selected Text” in the Display Items listbox. As you change things, the Sample area gives you an idea of what effect you are going to have. Black and yellow is the colour combination with the highest contrast – that’s why bees and wasps* are that colour. What next? How about increasing the default font for your demo scripts? This means that any script you open, and any new ones that you start, will take on this font. No more zooming (or forgetting to) in the middle of sessions. Now don’t forget to save this profile – follow the same steps as above but give the profile a different name; something like PresentationBigFontHighContrast might be appropriate. Once you are done making changes, export the settings once more, and then go into the Import and Export wizard and import settings from the first profile you created. Everything will be back to normal. Now making changes to suit your environment can be done very easily and with confidence.
    * – and warning tape and safety signs and so forth – Health and Safety officers simply copy nature!

    Read the article

  • Help understand GLSL directional light on iOS (left handed coord system)

    - by Robse
    I have now changed from GLKBaseEffect to my own shader implementation. I have a shader manager, which compiles and applies a shader at the right time and does some shader setup such as lights. Please have a look at my vertex shader code. Now, light direction should be provided in eye space, but I think there is something I don't get right. After I set up my view with the camera, I save a lightMatrix to transform the light from global space to eye space. My modelview and projection setup: - (void)setupViewWithWidth:(int)width height:(int)height camera:(N3DCamera *)aCamera { aCamera.aspect = (float)width / (float)height; float aspect = aCamera.aspect; float far = aCamera.far; float near = aCamera.near; float vFOV = aCamera.fieldOfView; float top = near * tanf(M_PI * vFOV / 360.0f); float bottom = -top; float right = aspect * top; float left = -right; // projection GLKMatrixStackLoadMatrix4(projectionStack, GLKMatrix4MakeFrustum(left, right, bottom, top, near, far)); // identity modelview GLKMatrixStackLoadMatrix4(modelviewStack, GLKMatrix4Identity); // switch to left handed coord system (forward = z+) GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeScale(1, 1, -1)); // transform camera GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeWithMatrix3(GLKMatrix3Transpose(aCamera.orientation))); GLKMatrixStackTranslate(modelviewStack, -aCamera.position.x, -aCamera.position.y, -aCamera.position.z); } - (GLKMatrix4)modelviewMatrix { return GLKMatrixStackGetMatrix4(modelviewStack); } - (GLKMatrix4)projectionMatrix { return GLKMatrixStackGetMatrix4(projectionStack); } - (GLKMatrix4)modelviewProjectionMatrix { return GLKMatrix4Multiply([self projectionMatrix], [self modelviewMatrix]); } - (GLKMatrix3)normalMatrix { return GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3([self modelviewProjectionMatrix]), NULL); } After that, I save the lightMatrix like this: [self.renderer setupViewWithWidth:view.drawableWidth height:view.drawableHeight camera:self.camera]; self.lightMatrix = [self.renderer modelviewProjectionMatrix]; And just before I render a 3d entity of the scene graph, I set up the light config for its shader with the lightMatrix like this: - (N3DLight)transformedLight:(N3DLight)light transformation:(GLKMatrix4)matrix { N3DLight transformedLight = N3DLightMakeDisabled(); if (N3DLightIsDirectional(light)) { GLKVector3 direction = GLKVector3MakeWithArray(GLKMatrix4MultiplyVector4(matrix, light.position).v); direction = GLKVector3Negate(direction); // HACK -> TODO: get lightMatrix right! transformedLight = N3DLightMakeDirectional(direction, light.diffuse, light.specular); } else { ... } return transformedLight; } You see the line where I negate the direction? I can't explain why I need to do that, but if I do, the lights are correct as far as I can tell. Please help me get rid of the hack. I'm worried that this has something to do with my switch to a left-handed coord system. My vertex shader looks like this: attribute highp vec4 inPosition; attribute lowp vec4 inNormal; ... uniform highp mat4 MVP; uniform highp mat4 MV; uniform lowp mat3 N; uniform lowp vec4 constantColor; uniform lowp vec4 ambient; uniform lowp vec4 light0Position; uniform lowp vec4 light0Diffuse; uniform lowp vec4 light0Specular; varying lowp vec4 vColor; varying lowp vec3 vTexCoord0; vec4 calcDirectional(vec3 dir, vec4 diffuse, vec4 specular, vec3 normal) { float NdotL = max(dot(normal, dir), 0.0); return NdotL * diffuse; } ...
    vec4 calcLight(vec4 pos, vec4 diffuse, vec4 specular, vec3 normal) { if (pos.w == 0.0) { // Directional Light return calcDirectional(normalize(pos.xyz), diffuse, specular, normal); } else { ... } } void main(void) { // position highp vec4 position = MVP * inPosition; gl_Position = position; // normal lowp vec3 normal = inNormal.xyz / inNormal.w; normal = N * normal; normal = normalize(normal); // colors vColor = constantColor * ambient; // add lights vColor += calcLight(light0Position, light0Diffuse, light0Specular, normal); ... }
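    (A hedged observation rather than a definitive fix: lightMatrix above is saved as modelview * projection, but a direction is conventionally brought into eye space with the modelview matrix alone; the projection should not be applied to light directions, and including it can flip the apparent direction. Using the accessors already defined above, the sketch would be:)
    // save the modelview matrix, not modelviewProjection, for transforming lights
    self.lightMatrix = [self.renderer modelviewMatrix];
    // transform the direction with w = 0 so the translation drops out
    GLKVector3 direction = GLKMatrix4MultiplyVector3(matrix, GLKVector3MakeWithArray(light.position.v));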

    Read the article

  • Notes - Part II - Play with JavaFX

    - by Silviu Turuga
    Open the project from the last lesson. Double click on NotesUI.fxml; this will open the JavaFX Scene Builder. On the left side you have an area called Hierarchy; from there press Del (or Shift+Backspace on Mac) to delete the Button and the Label. You'll receive a warning that some components have been assigned an fx:id; click Delete, as we don't need them anymore. Resize the AnchorPane to have enough room for our design, e.g. 820x550px. From the top left, pick the container called Accordion and drag it over the AnchorPane design. Then choose a List View from Controls and drag it inside the Accordion. You'll notice that by default the Accordion has 2 TitledPanes, and you can switch between them by clicking on their names. I'll leave you the pleasure of doing the rest in order to get the following result. Here is the list of objects used. Save it and then return to NetBeans. Run the application; it should run without any issue. If you click on buttons they are all functional, but nothing happens, as we didn't link them to any action. We'll see this in the next episode. Now, let's play a little bit with the application and try to resize it… Have you noticed the behavior? If the form is too small, some objects aren't visible; if it is too large, there is too much space. That's for sure something that your users won't like, and you as a programmer have to care about this. From NetBeans, double click NotesUI.fxml to return to the JavaFX Scene Builder. Select the TextField at the bottom left of Notes, the one where I put the text Category, and then in the right part of JavaFX Scene Builder you'll notice a panel called Inspector. Choose Layout and then click on the dotted lines to the left and bottom of the square, like you see in the image below. This will make the textfield always keep the same distance from the left and bottom, no matter the size of the form. Save and run the application. Note that whenever the form changes its height, the Category TextField keeps the same distance from the bottom. Select the Accordion and do the same steps, but also check the top dotted line, because we want the Accordion to have the same height as the main form. I'll leave you the pleasure of doing the same for the rest of the components. It's very important to design an application that can be resized by the user while, at the same time, all the buttons stay in place. The last step is to make sure our application cannot get smaller than a certain size, as this would hide parts of our layout. So select the AnchorPane, and from Inspector go to Layout and note down the Width and Height. Go back to NetBeans, open the file Main.java, and add the following code just after stage.setScene(scene); (around line 26): stage.setMinWidth(820); stage.setMinHeight(550); Use your own width and height. This will prevent the user from reducing the width or height of your application to a value that would hide parts of your layout. So now you should have done most of the design part, and next time we'll see how we can enter some data into our newly created application… Note: in case you missed something, here are the source files of the project up to this point.
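    For orientation, a minimal sketch of how the start() method in Main.java might look once those two lines are in (the surrounding layout is assumed from the standard NetBeans JavaFX template, not taken from the tutorial's sources):
    @Override
    public void start(Stage stage) throws Exception {
        Parent root = FXMLLoader.load(getClass().getResource("NotesUI.fxml"));
        Scene scene = new Scene(root);
        stage.setScene(scene);
        // keep the window from shrinking below the AnchorPane's design size
        stage.setMinWidth(820);
        stage.setMinHeight(550);
        stage.show();
    }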

    Read the article

  • The World of SQL Database Deployment

    - by GGBlogger
    In my early development days, I used Microsoft Access for building databases. It made things easy, since I only needed to package the database with the installation package so my clients would have access to it. When we began the development of a new package in Visual Studio .NET, I decided to use SQL Server Express. It was free and provided good tools - also free. I thought it was a tremendous idea until it came time to distribute our new software! What a surprise.
    The nightmare
    Ah, the choices! Detach the database and have the client reattach it to a newly installed – oh wait. FIRST my new client needs to download and install SQL Server Express with SQL Server Management Studio. That’s not a great thing, but it is one more nightmare step for users who may have other versions of SQL installed. Then the question became – do we detach and reattach, or do we do a backup? It was too late (bad planning) to revert to Microsoft Access, but we badly needed a simple way to package and distribute both the database AND sample contents.
    Red Gate to the rescue
    It took me a while to find an answer, but I did find it in a package called SQL Packager, sold by a relatively unpublicized company in England called Red Gate. They call their products “ingeniously simple” and I must agree with that description. With SQL Packager you point to the database (more in a minute) you want to distribute. A few mouse clicks and dialogs and you have an executable file that you can ship virtually anywhere and virtually any way which, when run, installs the database on your destination SQL Server instance! It really is that simple.
    Easier to show than tell
    Let’s explore a hypothetical case. Let’s say you have a local SQL database of customers and you have decided you want to share it with your subsidiaries or partners. Here is the underlying screen you will see on starting SQL Packager. There are a bunch of possibilities here, but I’m going to keep this relatively simple. At this point I simply want to illustrate the simplicity of generating an executable to deliver your database. You will notice that you can set up a new package, edit an existing package, or change a bunch of options.
    Start SQL Packager
    And the following is the default dialog you get on startup. In the next dialog, I’ve selected the Server and Database. I’ve also selected Windows Authentication. Pressing Next causes SQL Packager to run a number of checks and produce a report. Now you’re given a comprehensive list of what is going to be packaged, and you’re allowed to change it if you desire. I’ve never made any changes here, so I can’t really make any suggestions. That just illustrates the comprehensive nature of so many Red Gate products, including this one. Clicking Next gives you still further options. SQL Packager then works its magic and shows you a dialog with the results. Packager then gives you a dialog of the scripts it has generated. The capture above only shows 1 of 4 tabs. Finally, pressing Next gives you the option to generate a .NET executable or a C# project. I’ve only generated an executable, so I’m not in a position to tell you what the C# project looks like. That may be the subject of further discussions. You can rename the package and tell SQL Packager where to save it. I’ve skipped a lot, but this will serve to illustrate the comprehensive (and ingenious) things Red Gate does. All in all, it’s a superb way to distribute populated SQL databases. Oh – we’ll save running the resulting executable for later also, but believe me, it’s insanely simple.

    Read the article

  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need to have some computation. Records will have a combination of these functions run: Sum, Aggregate, Sum over the last 90 seconds, or ignore. A data record looks like this: Date;Data;ID Question: Assuming that ID is an int of some kind, and that int corresponds to a matrix of delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms, which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (sum, count, and print) but I don't know how to make the right set of delegates fire based on the source data (say, print the evens and sum the odds in this sample). using System; using System.Threading; using System.Collections.Generic; internal static class TestThreadpool { delegate int TestDelegate(int parameter); private static void Main() { try { // this approach works if void is returned. //ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello"); int c = 0; int w = 0; ThreadPool.GetMaxThreads(out w, out c); bool rrr = ThreadPool.SetMinThreads(w, c); Console.WriteLine(rrr); // perhaps the above needs time to set up Thread.Sleep(1000); DateTime ttt = DateTime.UtcNow; TestDelegate d = new TestDelegate(PrintOut); List<IAsyncResult> arDict = new List<IAsyncResult>(); int count = 1000000; for (int i = 0; i < count; i++) { IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d); arDict.Add(ar); } for (int i = 0; i < count; i++) { int result = d.EndInvoke(arDict[i]); } // Give the callback time to execute - otherwise the app // may terminate before it is called //Thread.Sleep(1000); var res = DateTime.UtcNow - ttt; Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds); } catch (Exception e) { Console.WriteLine(e); } Console.ReadKey(true); } static int PrintOut(int parameter) { // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:"+parameter); var tmp = parameter * parameter; return tmp; } static int Sum(int parameter) { Thread.Sleep(5000); // Pretend to do some math... maybe save a summary to disk on a separate thread return parameter; } static int Count(int parameter) { Thread.Sleep(5000); // Pretend to do some math... maybe save a summary to disk on a separate thread return parameter; } static void Callback(IAsyncResult ar) { TestDelegate d = (TestDelegate)ar.AsyncState; //Console.WriteLine("Callback is delayed and returned") ;//d.EndInvoke(ar)); } }
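    A sketch of one way to build the launch map the question asks about (names and the ID scheme are hypothetical; here each ID simply selects the list of delegates to fire, giving "print the evens, sum the odds"):
    using System;
    using System.Collections.Generic;

    internal static class LaunchMap
    {
        private delegate int TestDelegate(int parameter);

        // ID -> the delegates that should run for records carrying that ID
        private static readonly Dictionary<int, List<TestDelegate>> map =
            new Dictionary<int, List<TestDelegate>>
            {
                { 0, new List<TestDelegate> { PrintOut } }, // even IDs: print
                { 1, new List<TestDelegate> { Sum } },      // odd IDs: sum
            };

        private static int PrintOut(int p) { Console.WriteLine(p); return p; }
        private static int Sum(int p) { return p; } // placeholder math, as in the question

        public static void Dispatch(int id, int data)
        {
            List<TestDelegate> handlers;
            if (map.TryGetValue(id % 2, out handlers))
            {
                foreach (TestDelegate f in handlers)
                {
                    // fire asynchronously as the question's sample does; a real
                    // implementation should pair this with EndInvoke in a callback
                    f.BeginInvoke(data, null, null);
                }
            }
        }
    }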

    Read the article

  • Patterns for a tree of persistent data with multiple storage options?

    - by Robin Winslow
    I have a real-world problem which I'll try to abstract into an illustrative example. So imagine I have data objects in a tree, where parent objects can access children, and children can access parents:
    // Interfaces
    interface IParent<TChild> { List<TChild> Children { get; set; } }
    interface IChild<TParent> { TParent Parent { get; set; } }
    // Classes
    class Top : IParent<Middle> {}
    class Middle : IParent<Bottom>, IChild<Top> {}
    class Bottom : IChild<Middle> {}
    // Usage
    var top = new Top();
    var middles = top.Children; // List<Middle>
    foreach (var middle in middles) {
        var bottoms = middle.Children; // List<Bottom>
        foreach (var bottom in bottoms) {
            var parent = bottom.Parent;      // Access the parent
            var grandparent = parent.Parent; // Access the grandparent
        }
    }
    All three data objects have properties that are persisted in two data stores (e.g. a database and a web service), and they need to reflect and synchronise with the stores. Some objects only request from the web service, some only write to it.
    Data Mapper
    My favourite pattern for data access is Data Mapper, because it completely separates the data objects themselves from the communication with the data store:
    class TopMapper {
        public Top FetchById(int id) {
            var top = new Top(DataStore.TopDataById(id));
            top.Children = MiddleMapper.FetchForTop(top);
            return top;
        }
    }
    class MiddleMapper {
        public Middle FetchById(int id) {
            var middle = new Middle(DataStore.MiddleDataById(id));
            middle.Parent = TopMapper.FetchForMiddle(middle);
            middle.Children = BottomMapper.FetchForMiddle(middle);
            return middle;
        }
    }
    This way I can have one mapper per data store, build the object from the mapper I want, and then save it back using the mapper I want. There is a circular reference here, but I guess that's not a problem, because most languages can just store memory references to the objects, so there won't actually be infinite data. The problem with this is that every time I want to construct a new Top, Middle or Bottom, it needs to build the entire object tree within that object's Parent or Children property, with all the data store requests and memory usage that that entails. And in real life my tree is much bigger than the one represented here, so that's a problem.
    Requests in the object
    In this approach the objects request their Parents and Children themselves:
    class Middle {
        private List<Bottom> _children = null; // cache
        public List<Bottom> Children {
            get {
                _children = _children ?? BottomMapper.FetchForMiddle(this);
                return _children;
            }
            set {
                BottomMapper.UpdateForMiddle(this, value);
                _children = value;
            }
        }
    }
    I think this is an example of the repository pattern. Is that correct? This solution seems neat - the data only gets requested from the data store when you need it, and thereafter it's stored in the object if you want to request it again, avoiding a further request. However, I have two different data sources. There's a database, but there's also a web service, and I need to be able to create an object from the web service, save it back to the database, then request it again from the database and update the web service. This also makes me uneasy, because the data objects themselves are no longer ignorant of the data source. We've introduced a new dependency, not to mention a circular dependency, making it harder to test. And the objects now mask their communication with the database.
    Other solutions
    Are there any other solutions which could take care of the multiple stores problem but also mean that I don't need to build / request all the data every time?

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:
    % pgrep vmtasks
    8
    % prstat -p 8
      PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
        8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32
    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflg & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl(). Segment operations such as creation and locking are typically single threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time. To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 threads of vmtasks are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing, the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future. Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86). The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:
    ISM
    system  ncpu  before  after  speedup
    ------  ----  ------  -----  -------
    x4600     32    1386    245     6X
    X7560     64    1016    153     7X
    M9000    512    1196    206     6X
    T5240    128    2506    234    11X
    T4-2     128    1197    107    11X
    DISM
    system  ncpu  before  after  speedup
    ------  ----  ------  -----  -------
    x4600     32    1582    265     6X
    X7560     64    1116    158     7X
    M9000    512    1165    152     8X
    T5240    128    2796    198    14X
    (I am missing the data for T4 DISM, for no good reason; it works fine.) The following table separates the creation and destruction times:
    ISM, T4-2
              before  after
    create       702     64
    destroy      495     43
    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code are available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com
    Extensions and addins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions, and the list of 3rd party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed. Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast, since it is only a matter of extracting the files within the VSIX package to the local extension folder. But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world, yet you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012 it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content, of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom Feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn't it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom Feed XML? Well, search no more: this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WIX templates and our custom TFS Checkin Policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.
    Installing the Service
    1. Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward; just select the web site, application pool and (optionally) a virtual directory where you want to install the service. Note: If you want to run it in the web site root, just leave the application name blank.
    2. Press Next and finish the installer.
    3. Open web.config in a text editor and locate the <applicationSettings> element.
    4. Edit the following setting values:
       - FeedTitle: This is the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
       - BaseURI: When Visual Studio downloads the extension, it will be given this URI + the name of the extension that you selected. This value should be in the following format: http://SERVER/[VDIR]/gallery/extension/
       - VSIXAbsolutePath: This is the path where you will deploy your extensions. This can be a local folder or a remote share. You just need to make sure that the application pool identity account has read permissions in this folder.
    5. Save web.config to finish the installation.
    6. Open a browser and enter the URL to the service. It should show an empty Feed page.
    Adding the Private Gallery in Visual Studio 2012
    Now you need to add the gallery in Visual Studio. This is very easy and is done as follows:
    1. Go to Tools –> Options and select Environment –> Extensions and Updates.
    2. Press Add to add a new gallery.
    3. Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step.
    4. Press OK to save the settings.
    Deploying an Extension
    This one is easy: just drop the file in the designated folder! :-) If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio gallery. I hope that you will find this service useful. Please contact me if you have questions or suggestions for improvements!
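    For reference, a sketch of roughly what the edited section could look like (the enclosing section element name here is hypothetical; use whatever is already present in the installed web.config, since only the three setting names come from the service itself):
    <applicationSettings>
      <Inmeta.VSGalleryService.Properties.Settings>
        <setting name="FeedTitle" serializeAs="String">
          <value>Inmeta Visual Studio Gallery</value>
        </setting>
        <setting name="BaseURI" serializeAs="String">
          <value>http://myserver/vsgallery/gallery/extension/</value>
        </setting>
        <setting name="VSIXAbsolutePath" serializeAs="String">
          <value>\\fileserver\vsix-drop</value>
        </setting>
      </Inmeta.VSGalleryService.Properties.Settings>
    </applicationSettings>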

    Read the article

  • Is my implementation of A* wrong?

    - by Bloodyaugust
    I've implemented the A* algorithm in my program. However, it would seem to be functioning incorrectly at times. Below is a screenshot of one such time. The obviously shorter route is to go immediately right at the second to last row. Instead, they move down, around the tower, and continue to their destination (bottom right from top left). Below is my actual code implementation: nodeMap.prototype.findPath = function(p1, p2) { var openList = []; var closedList = []; var nodes = this.nodes; for (var i = 0; i < nodes.length; i++) { //reset heuristics and parents for nodes var curNode = nodes[i]; curNode.f = 0; curNode.g = 0; curNode.h = 0; curNode.parent = null; if (curNode.pathable === false) { closedList.push(curNode); } } openList.push(this.getNode(p1)); while(openList.length > 0) { // Grab the lowest f(x) to process next var lowInd = 0; for(i=0; i<openList.length; i++) { if(openList[i].f < openList[lowInd].f) { lowInd = i; } } var currentNode = openList[lowInd]; if (currentNode === this.getNode(p2)) { var curr = currentNode; var ret = []; while(curr.parent) { ret.push(curr); curr = curr.parent; } return ret.reverse(); } closedList.push(currentNode); for (i = 0; i < openList.length; i++) { //remove currentNode from openList if (openList[i] === currentNode) { openList.splice(i, 1); break; } } for (i = 0; i < currentNode.neighbors.length; i++) { if(closedList.indexOf(currentNode.neighbors[i]) !== -1 ) { continue; } if (currentNode.neighbors[i].isPathable === false) { closedList.push(currentNode.neighbors[i]); continue; } var gScore = currentNode.g + 1; // 1 is the distance from a node to its neighbor var gScoreIsBest = false; if (openList.indexOf(currentNode.neighbors[i]) === -1) { //save g, h, and f then save the current parent gScoreIsBest = true; currentNode.neighbors[i].h = currentNode.neighbors[i].heuristic(this.getNode(p2)); openList.push(currentNode.neighbors[i]); } else if (gScore < currentNode.neighbors[i].g) { //current g better than previous g gScoreIsBest = true; } if (gScoreIsBest) { currentNode.neighbors[i].parent = currentNode; currentNode.neighbors[i].g = gScore; currentNode.neighbors[i].f = currentNode.neighbors[i].g + currentNode.neighbors[i].h; } } } return false; } Towers block pathability. Is there perhaps something I am missing here, or does A* not always find the shortest path in a situation such as this? Thanks in advance for any help.

    Read the article

  • Caching with AVPlayer and AVAssetExportSession

    - by tba
    I would like to cache progressive-download videos using AVPlayer. How can I save an AVPlayer's item to disk? I'm trying to use AVAssetExportSession on the player's currentItem (which is fully loaded). This code is giving me "AVAssetExportSessionStatusFailed (The operation could not be completed)" : AVAsset *mediaAsset = self.player.currentItem.asset; AVAssetExportSession *es = [[AVAssetExportSession alloc] initWithAsset:mediaAsset presetName:AVAssetExportPresetLowQuality]; NSString *outPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"out.mp4"]; NSFileManager *fileManager = [NSFileManager defaultManager]; [fileManager removeItemAtPath:outPath error:NULL]; es.outputFileType = @"com.apple.quicktime-movie"; es.outputURL = [[[NSURL alloc] initFileURLWithPath:outPath] autorelease]; NSLog(@"exporting to %@",outPath); [es exportAsynchronouslyWithCompletionHandler:^{ NSString *status = @""; if( es.status == AVAssetExportSessionStatusUnknown ) status = @"AVAssetExportSessionStatusUnknown"; else if( es.status == AVAssetExportSessionStatusWaiting ) status = @"AVAssetExportSessionStatusWaiting"; else if( es.status == AVAssetExportSessionStatusExporting ) status = @"AVAssetExportSessionStatusExporting"; else if( es.status == AVAssetExportSessionStatusCompleted ) status = @"AVAssetExportSessionStatusCompleted"; else if( es.status == AVAssetExportSessionStatusFailed ) status = @"AVAssetExportSessionStatusFailed"; else if( es.status == AVAssetExportSessionStatusCancelled ) status = @"AVAssetExportSessionStatusCancelled"; NSLog(@"done exporting to %@ status %d = %@ (%@)",outPath,es.status, status,[[es error] localizedDescription]); }]; How can I export successfully? I'm looking into copying mediaAsset into an AVMutableComposition, but haven't had much luck with that either. Thanks! PS: Here are some questions from people trying to accomplish the same thing (but with MPMoviePlayerController): Cache Progressive downloaded content in MPMoviePlayerController Simultaneously stream and save a video? Caching videos to disk after successful preload by MPMoviePlayerController

    Read the article

  • Spring Framework 3 and session attributes

    - by newbie
    I have a form object that I set on the request in the GET request handler of my Spring controller. The first time the user enters the page, a new form object should be created and set on the request. If the user submits the form, the form object is populated from the request, so it then has all the user-given attributes. The form is then validated and, if validation passes, saved to the database. If the form does not validate, I want to save the form object to the session and then redirect to the GET-handling page. When the request is redirected to the GET handler, it should check whether the session contains a form object. I have figured out that there is a @SessionAttributes("form") annotation in Spring, but for some reason the following doesn't work: the first time around, the session attribute form is null and it gives this error:
    org.springframework.web.HttpSessionRequiredException: Session attribute 'form' required - not found in session
    Here is my controller:
    @RequestMapping(value="form", method=RequestMethod.GET)
    public ModelAndView viewForm(@ModelAttribute("form") Form form) {
        ModelAndView mav = new ModelAndView("form");
        if(form == null)
            form = new Form();
        mav.addObject("form", form);
        return mav;
    }

    @RequestMapping(value="form", method=RequestMethod.POST)
    @Transactional(readOnly = true)
    public ModelAndView saveForm(@ModelAttribute("form") Form form) {
        FormUtils.populate(form, request);
        if(form.validate()) {
            formDao.save();
        } else {
            return viewForm(form);
        }
        return null;
    }

    Read the article

  • Parse.com REST API in Java (NOT Android)

    - by Orange Peel
    I am trying to use the Parse.com REST API in Java. I have gone through the 4 solutions given here https://parse.com/docs/api_libraries and have selected Parse4J. After importing the source into NetBeans, along with importing the following libraries:
    org.slf4j:slf4j-api:jar:1.6.1
    org.apache.httpcomponents:httpclient:jar:4.3.2
    org.apache.httpcomponents:httpcore:jar:4.3.1
    org.json:json:jar:20131018
    commons-codec:commons-codec:jar:1.9
    junit:junit:jar:4.11
    ch.qos.logback:logback-classic:jar:0.9.28
    ch.qos.logback:logback-core:jar:0.9.28
    I ran the example code from https://github.com/thiagolocatelli/parse4j
    Parse.initialize(APP_ID, APP_REST_API_ID); // I replaced these with mine
    ParseObject gameScore = new ParseObject("GameScore");
    gameScore.put("score", 1337);
    gameScore.put("playerName", "Sean Plott");
    gameScore.put("cheatMode", false);
    gameScore.save();
    And I got that it was missing org.apache.commons.logging, so I downloaded that and imported it. Then I ran the code again and got:
    Exception in thread "main" java.lang.NoSuchMethodError: org.slf4j.spi.LocationAwareLogger.log(Lorg/slf4j/Marker;Ljava/lang/String;ILjava/lang/String;Ljava/lang/Throwable;)V
        at org.apache.commons.logging.impl.SLF4JLocationAwareLog.debug(SLF4JLocationAwareLog.java:120)
        at org.apache.http.client.protocol.RequestAddCookies.process(RequestAddCookies.java:122)
        at org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:131)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:193)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:85)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
        at org.parse4j.command.ParseCommand.perform(ParseCommand.java:44)
        at org.parse4j.ParseObject.save(ParseObject.java:450)
    I could probably fix this with another import, but I suppose then something else would pop up. I tried the other libraries with similar results: missing a bunch of libraries. Has anyone actually used the REST API successfully in Java? If so, I would be grateful if you shared which library or libraries you used and anything else required to get it going successfully. I am using NetBeans. Thanks.

    Read the article

  • Linq to SQL EntitySet Binding the MVVM way

    - by Savvas Sopiadis
    Hi everybody! In a WPF application I'm using LINQ to SQL classes (created by SQL Metal, thus implementing POCOs). Let's assume I have a table User and a table Pictures. These pictures are actually created from one picture; the difference between them may be the size, coloring, ... So every user may have more than one picture, and the association is 1:N (User:Pictures). My problems:
    a) how do I bind, in an MVVM manner, a picture control to one picture (I will take one specific picture) in the EntitySet, to display it?
    b) every time a user changes her picture, should the whole EntitySet be thrown away and the newly created picture(s) added? Is this the correct way? e.g.
    //create the 1st picture object
    UserPicture1 = new UserPicture();
    UserPicture1.Description = "... some description.. ";
    UserPicture1.Image = imgBytes; //array of bytes
    //create the 2nd picture object
    UserPicture2 = new UserPicture();
    UserPicture2.Description = "... another description.. ";
    UserPicture2.Image = DoSomethingWithPreviousImg(imgBytes); //array of bytes
    //Assuming that the entityset is called Pictures,
    //add these pictures to the corresponding user
    User.Pictures.Add(UserPicture1);
    User.Pictures.Add(UserPicture2);
    //save changes
    datacontext.Save()
    Thanks in advance

    Read the article

  • Open a URL with parameters using selenium.open()

    - by gagneet
    I am using selenium.open() to open a URL which prints the cookie output to the browser window:

        String cookiestr = "http://my.server.com/cookie?out=text";
        selenium.open( cookiestr );

    The problem is that it opens a "Save As..." popup to save a file named "cookie". When I open the same URL in my browser directly, it displays the text in the browser window. I want to capture the body text shown when the URL is opened, but am unable to do so. Is there any other command available which I can use to do this?

        BufferedWriter outputfile = null;
        String bodytext = selenium.getBodyText();
        System.out.println("Body Text :" + bodytext);
        Integer I = new Integer(i);
        filename = "C:\\cookies\\" + I.toString() + ".txt";
        outputfile = new BufferedWriter(new FileWriter( filename ));
        outputfile.write( bodytext );
        outputfile.newLine();
        i++;
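    The "Save As..." dialog is the browser reacting to the response headers (Content-Type / Content-Disposition), so no Selenium command can read a page body the browser never renders. One way around it is to fetch that URL directly over HTTP and skip the browser for this step. A minimal sketch using only the JDK, with CookieFetcher as an illustrative wrapper; note it opens a fresh connection, so any cookies belonging to the Selenium session are not sent automatically:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;

        public class CookieFetcher {
            public static void main(String[] args) throws Exception {
                // Read the raw response body instead of driving the browser.
                URL url = new URL("http://my.server.com/cookie?out=text");
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(url.openStream()));
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line).append('\n');
                }
                in.close();
                System.out.println("Body Text :" + body);
            }
        }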

    Read the article

  • Sending formatted Lotus Notes rich text email from Excel VBA

    - by Lunatik
    I have little LotusScript or Notes/Domino knowledge, but I have a procedure, copied from somewhere a long time ago, that allows me to email through Notes from VBA. I normally only use this for internal notifications where the formatting hasn't really mattered. I now want to use it to send external emails to a client, and corporate types would rather the email complied with our style guide (a sans-serif typeface, basically). I was about to tell them that the code only works with plain text, but then I noticed that the routine does reference some sort of CREATERICHTEXTITEM object. Does this mean I could apply some sort of formatting to the body text string after it has been passed to the mail routine? As well as upholding our precious brand values, this would be quite handy for highlighting certain passages in the email. I've had a dig about the 'net to see if this code could be adapted, but being unfamiliar with Notes' object model, and the fact that online Notes resources seem to mirror the application's own obtuseness, meant I didn't get very far. The code:

        Sub sendEmail(EmailSubject As String, EMailSendTo As String, EMailBody As String, MailServer As String)

            Dim objNotesSession As Object
            Dim objNotesMailFile As Object
            Dim objNotesDocument As Object
            Dim objNotesField As Object
            Dim sendmail As Boolean

            'added for integration into reporting tool
            Dim dbString As String
            dbString = "mail\" & Application.UserName & ".nsf"

            On Error GoTo SendMailError

            'Establish Connection to Notes
            Set objNotesSession = CreateObject("Notes.NotesSession")

            On Error Resume Next

            'Establish Connection to Mail File
            Set objNotesMailFile = objNotesSession.GETDATABASE(MailServer, dbString)
            'Open Mail
            objNotesMailFile.OPENMAIL

            On Error GoTo 0

            'Create New Memo
            Set objNotesDocument = objNotesMailFile.createdocument

            Dim oWorkSpace As Object, oUIdoc As Object
            Set oWorkSpace = CreateObject("Notes.NotesUIWorkspace")
            Set oUIdoc = oWorkSpace.CurrentDocument

            'Create 'Subject Field'
            Set objNotesField = objNotesDocument.APPENDITEMVALUE("Subject", EmailSubject)

            'Create 'Send To' Field
            Set objNotesField = objNotesDocument.APPENDITEMVALUE("SendTo", EMailSendTo)

            'Create 'Copy To' Field
            Set objNotesField = objNotesDocument.APPENDITEMVALUE("CopyTo", EMailCCTo)

            'Create 'Blind Copy To' Field
            Set objNotesField = objNotesDocument.APPENDITEMVALUE("BlindCopyTo", EMailBCCTo)

            'Create 'Body' of memo
            Set objNotesField = objNotesDocument.CREATERICHTEXTITEM("Body")
            With objNotesField
                .APPENDTEXT EMailBody
                .ADDNEWLINE 1
            End With

            'Send the e-mail
            Call objNotesDocument.Save(True, False, False)
            objNotesDocument.SaveMessageOnSend = True
            'objNotesDocument.Save
            objNotesDocument.Send (0)

            'Release storage
            Set objNotesSession = Nothing
            Set objNotesMailFile = Nothing
            Set objNotesDocument = Nothing
            Set objNotesField = Nothing

            'Set return code
            sendmail = True
            Exit Sub

        SendMailError:
            Dim Msg
            Msg = "Error # " & Str(Err.Number) & " was generated by " _
                & Err.Source & Chr(13) & Err.Description
            MsgBox Msg, , "Error", Err.HelpFile, Err.HelpContext
            sendmail = False
        End Sub
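    Yes: CREATERICHTEXTITEM means the body is a Notes rich text item rather than plain text, so it can carry formatting. In the back-end Notes classes the usual tool is a NotesRichTextStyle created from the session and appended to the rich text item before the text it should affect. A hedged sketch of how the body section might change (untested; assumes the OLE automation classes expose CREATERICHTEXTSTYLE and APPENDSTYLE the way LotusScript does):

        'Create 'Body' of memo using a sans-serif style
        Dim objNotesStyle As Object
        Set objNotesStyle = objNotesSession.CREATERICHTEXTSTYLE
        objNotesStyle.NotesFont = 1      'FONT_HELV, i.e. the Helvetica/Arial family
        objNotesStyle.FontSize = 10
        objNotesStyle.Bold = False

        Set objNotesField = objNotesDocument.CREATERICHTEXTITEM("Body")
        With objNotesField
            .APPENDSTYLE objNotesStyle   'style applies to everything appended after it
            .APPENDTEXT EMailBody
            .ADDNEWLINE 1
        End With

    Switching styles between APPENDTEXT calls (e.g. a second style with Bold = True) is how individual passages would be highlighted.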

    Read the article

  • jQuery Ajax call - Set variable value on success.

    - by Nathan
    Hey all, I have an application that modifies data on a cached object on the server. The modifications are performed through an ajax call that basically updates properties of that object. When the user is done working, a basic 'Save Changes' button allows them to save the data and flush the cached object. To protect the user, I want to warn them if they try to navigate away from the page while unsaved modifications exist on the server object. So I created a web service method called IsInitialized that returns true or false based on whether or not changes have been saved. If they have not been saved, I want to prompt the user and give them a chance to cancel their navigation request. Here's my problem: although I have the calls working correctly, I can't seem to get the ajax success callback to set the variable value. Here's the code I have now:

        //Catches the users to keep them from navigating off the page w/o saved changes...
        window.onbeforeunload = CheckSaveStatus;
        var IsInitialized;

        function CheckSaveStatus() {
            var temp = $.ajax({
                type: "POST",
                url: "URL.asmx/CheckIfInstanceIsInitilized",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function(result) {
                    IsInitialized = result.d;
                },
                error: function(xmlHttpRequest, status, err) {
                    alert(xmlHttpRequest.statusText + " " + xmlHttpRequest.status +
                          " : " + xmlHttpRequest.responseText);
                }
            });

            if (IsInitialized) {
                return "You currently have unprocessed changes for this Simulation.";
            }
        }

    I feel that I might be trying to use the success callback in an inappropriate manner. How do I set a JavaScript variable in the success callback so that I can decide whether or not the user should be prompted with the unsaved changes message? As was just pointed out, I am making an asynchronous call, which means the rest of the code gets executed before my method returns. Is there a way to use that ajax call but still catch the window.onbeforeunload event (without making synchronous ajax)?
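    The callback cannot help here: onbeforeunload needs its answer synchronously, and by the time success fires, CheckSaveStatus has already returned with IsInitialized still undefined. One approach that avoids a synchronous request is to keep the dirty state on the client, updated by the same ajax calls that modify the server object. A minimal sketch, where markDirty/markSaved are illustrative hooks to be called from the existing success handlers:

        // Track save state locally so onbeforeunload can answer synchronously.
        var isDirty = false;

        function markDirty() { isDirty = true; }   // call when a modification succeeds
        function markSaved() { isDirty = false; }  // call when 'Save Changes' succeeds

        window.onbeforeunload = function () {
            if (isDirty) {
                return "You currently have unprocessed changes for this Simulation.";
            }
            // returning nothing lets the navigation proceed without a prompt
        };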

    Read the article

  • CKEditor inside jQuery Dialog, how do I build it?

    - by Ben Dauphinee
    So, I'm working with CKEditor and jQuery, trying to build a pop-out editor. Below is what I have coded so far, and I can't seem to get it working the way I want. Basically: click the 'Edit' link, and a dialog box pops up with the content to edit loaded into the CKEditor. Also, not required, but helpful if you can suggest how to do it: I can't seem to find out how to make the Save button work in CKEditor (though I think the form will do it). Thanks in advance for any help.

        $(document).ready(function(){
            var config = new Array();
            config.height = "350px";
            config.resize_enabled = false;
            config.tabSpaces = 4;
            config.toolbarCanCollapse = false;
            config.width = "700px";
            config.toolbar_Full = [["Save","-","Cut","Copy","Paste","-","Undo","Redo","-",
                "Bold","Italic","-","NumberedList","BulletedList","-",
                "Link","Unlink","-","Image","Table"]];

            $("a.opener").click(function(){
                var editid = $(this).attr("href");
                var editwin = '<form><div id="header"><input type="text"></div>' +
                              '<div id="content"><textarea id="content"></textarea></div></form>';
                var $dialog = $("<div>" + editwin + "</div>").dialog({
                    autoOpen: false,
                    title: "Editor",
                    height: 360,
                    width: 710,
                    buttons: {
                        "Ok": function(){
                            var data = $(this).val();
                        }
                    }
                });
                //$(this).dialog("close");

                $.getJSON("ajax/" + editid, function(data){
                    alert("datagrab");
                    $dialog.find("textarea#content").html(data.content).ckeditor(config);
                    alert("winset");
                    $dialog.dialog("open");
                });
                return false;
            });
        });
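    Two notes on the code above, hedged since they depend on CKEditor's jQuery adapter being loaded (which the .ckeditor(config) call implies): the edited HTML lives in the editor instance, not the textarea, which is only synced on form submit, so the Ok button should pull data out via the adapter; and the dialog button is usually a more reliable "save" than the toolbar Save button, which tries to submit the surrounding form. An illustrative Ok handler, with "ajax/save/" as a made-up endpoint:

        "Ok": function(){
            // ckeditorGet() returns the CKEditor instance behind the textarea.
            var editor = $dialog.find("textarea#content").ckeditorGet();
            var data = editor.getData();   // the edited HTML
            $.post("ajax/save/" + editid, { content: data });
            $(this).dialog("close");
        }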

    Read the article

  • Writing a docx file to a BLOB using Java 1.4 inside Oracle 10g

    - by Jon Renaut
    I'm trying to generate a blank docx file using Java, add some text, then write it to a BLOB that I can return to our document processing engine (a custom mess of PL/SQL and Java). I have to use the 1.4 JVM inside Oracle 10g, so no Java 1.5 features. I don't have a problem writing the docx to a file on my local machine, but when I try to write to the BLOB, I get garbage. Am I doing something dumb? Any help is appreciated. Note that in the code below, all the get[name]Xml() methods return an org.w3c.dom.Document.

        public void save(String fileName) throws Exception {
            ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(fileName));
            addEntry(zos, getDocumentXml(), "word/document.xml");
            addEntry(zos, getContentTypesXml(), "[Content_Types].xml");
            addEntry(zos, getRelsXml(), "_rels/.rels");
            zos.flush();
            zos.close();
        }

        public java.sql.Blob save() throws Exception {
            java.sql.Connection conn = DbUtilities.openConnection();
            BLOB outBlob = BLOB.createTemporary(conn, true, BLOB.DURATION_SESSION);
            outBlob.open(BLOB.MODE_READWRITE);
            ZipOutputStream zos = new ZipOutputStream(outBlob.setBinaryStream(0L));
            addEntry(zos, getDocumentXml(), "word/document.xml");
            addEntry(zos, getContentTypesXml(), "[Content_Types].xml");
            addEntry(zos, getRelsXml(), "_rels/.rels");
            zos.flush();
            zos.close();
            return outBlob;
        }

        private void addEntry(ZipOutputStream zos, Document doc, String fileName) throws Exception {
            Transformer t = TransformerFactory.newInstance().newTransformer();
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            t.transform(new DOMSource(doc), new StreamResult(baos));

            ZipEntry ze = new ZipEntry(fileName);
            byte[] data = baos.toByteArray();
            ze.setSize(data.length);
            zos.putNextEntry(ze);
            zos.write(data);
            zos.flush();
            zos.closeEntry();
        }
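    One detail that stands out is setBinaryStream(0L): JDBC LOB offsets are 1-based, so writing from position 0 is a plausible source of the garbage. A hedged rework that sidesteps stream positioning entirely by buffering the zip in memory and copying it into the BLOB in one call (saveToBlob is an illustrative name; assumes the 10g driver's oracle.sql.BLOB implements the JDBC 3.0 setBytes, which the 1.4 JVM supports):

        public java.sql.Blob saveToBlob(java.sql.Connection conn) throws Exception {
            // Build the whole docx zip in memory first.
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ZipOutputStream zos = new ZipOutputStream(baos);
            addEntry(zos, getDocumentXml(), "word/document.xml");
            addEntry(zos, getContentTypesXml(), "[Content_Types].xml");
            addEntry(zos, getRelsXml(), "_rels/.rels");
            zos.finish();   // writes the zip central directory
            zos.close();

            // Copy the finished bytes into the temporary BLOB, starting at offset 1.
            BLOB outBlob = BLOB.createTemporary(conn, true, BLOB.DURATION_SESSION);
            outBlob.open(BLOB.MODE_READWRITE);
            outBlob.setBytes(1L, baos.toByteArray());
            return outBlob;
        }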

    Read the article

  • how to map SubclassMap and HasManyToMany in Fluent NHibernate

    - by Davide Orazio Montersino
    Hi everyone. My problem is that when Fluent NHibernate maps a many-to-many relationship, the two sides end up referencing a non-existent Id.

        public UserMap()
        {
            Id(x => x.Id);
            Map(x => x.Name);
            Map(x => x.Password);
            Map(x => x.Confirmed);
            HasMany(x => x.Nodes).Cascade.SaveUpdate();
            HasManyToMany<Node>(x => x.Events).Cascade.SaveUpdate().Table("RSVPs");
        }

        public EventMap()
        {
            Map(x => x.Starts);
            Map(x => x.Ends);
            HasManyToMany<User>(x => x.Rsvps).Cascade.SaveUpdate().Table("RSVPs");
        }

        public NodeMap()
        {
            Id(x => x.Id);
            Map(x => x.Title);
            Map(x => x.Body).CustomSqlType("text");
            Map(x => x.CreationDate);
            References(x => x.Author).Cascade.SaveUpdate();
            Map(x => x.Permalink).Unique().Not.Nullable();
        }

    These are my classes; notice that Event inherits from Node:

        public class Event : Node //, IEvent
        {
            private DateTime _starts = DateTime.MinValue;
            private DateTime _ends = DateTime.MaxValue;
            public virtual IList<User> Rsvps { get; set; }
        }

    The problem is that the generated RSVPs table has these columns:

        Event_id
        User_id
        Node_id

    Of course the Event table has no Id of its own, only a Node_id. When trying to save a relationship, NHibernate tries to save a NULL Event_id, which generates an error.
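    Since Event inherits from Node, the usual Fluent NHibernate shape is to map it as a SubclassMap<Event> (no Id() call) and name the many-to-many key columns explicitly, so the RSVPs link table keys off Node_id instead of growing a phantom Event_id. A hedged sketch, with column names assumed from the generated schema:

        public class EventMap : SubclassMap<Event>
        {
            public EventMap()
            {
                // Joined-subclass key back to the Node table.
                KeyColumn("Node_id");

                Map(x => x.Starts);
                Map(x => x.Ends);

                HasManyToMany<User>(x => x.Rsvps)
                    .Table("RSVPs")
                    .ParentKeyColumn("Node_id")   // the Event side of the link table
                    .ChildKeyColumn("User_id")
                    .Cascade.SaveUpdate();
            }
        }

    The UserMap side would mirror it with .ParentKeyColumn("User_id").ChildKeyColumn("Node_id") on its HasManyToMany, so both directions agree on the same pair of columns.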

    Read the article

  • LINQ To SQL ignore unique constraint exception and continue

    - by Martin
    I have a single table in a database called Users:

        Users
        ------
        ID (PK, Identity)
        Username (Unique Index)

    I have set up a unique index on the Username column to prevent duplicates. I am then enumerating through a collection and creating a new user in the database for each item. What I want to do is just insert a new user and ignore the exception if the unique key constraint is violated (as it's clearly a duplicate record in that case). This is to avoid having to craft "where not exists" kinds of queries. First off, is this going to be any more efficient, or should my insert code be checking for duplicates instead? I'm drawn more to having that logic in the database, as this prevents any other type of client from inserting duplicate data. My other issue is related to LINQ to SQL. I have the following code:

        public class TestRepo
        {
            DatabaseDataContext database = new DatabaseDataContext();

            public void Add(string username)
            {
                database.Users.InsertOnSubmit(new User() { Username = username });
            }

            public void Save()
            {
                database.SubmitChanges();
            }
        }

    And then I iterate over a collection and insert new users, ignoring any exceptions:

        TestRepo repo = new TestRepo();
        foreach (var name in new string[] { "Tim", "Bob", "John" })
        {
            try
            {
                repo.Add(name);
                repo.Save();
            }
            catch { }
        }

    The first time this is run, great: I have three users in the table. If I remove the second one and run this code again, nothing is inserted. I expected the first insert to fail with the exception, the second to succeed (as I just removed that item from the DB), and the third to then fail. What seems to be happening is that once the SqlException is thrown (even though the loop continues to iterate), all of the subsequent inserts fail, even when there isn't a row in the table that would cause a unique violation. Can anyone explain this? P.S. The only workaround I could find was to instantiate the repo each time before the insert; then it worked exactly as expected, indicating that it's something to do with the LINQ to SQL DataContext. Thanks.
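    That behaviour is consistent with how the DataContext tracks changes: a failed SubmitChanges leaves the rejected User in the context's pending-insert list, and every later SubmitChanges on the same context replays it (and fails again) before reaching the new row, which matches the workaround noted in the P.S. A hedged sketch of scoping one context per attempt and swallowing only duplicate-key errors:

        foreach (var name in new string[] { "Tim", "Bob", "John" })
        {
            using (var database = new DatabaseDataContext())
            {
                try
                {
                    database.Users.InsertOnSubmit(new User() { Username = name });
                    database.SubmitChanges();
                }
                catch (System.Data.SqlClient.SqlException ex)
                {
                    // 2601 and 2627 are SQL Server's duplicate-key error numbers;
                    // anything else should still surface.
                    if (ex.Number != 2601 && ex.Number != 2627) throw;
                }
            }
        }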

    Read the article
