Search Results

Search found 15875 results on 635 pages for 'border box'.


  • My own personal use of Oracle Linux

    - by wcoekaer
    It always is easier to explain something with examples... Many people still don't seem to understand some of the convenient things around using Oracle Linux and since I personally (surprise!) use it at home, let me give you an idea. I have quite a few servers at home and I also have 2 hosted servers with a hosting provider. The servers at home I use mostly to play with random Linux-related things, or with Oracle VM, or just to try out various new Oracle products to learn more. I like the technology, it's like a hobby really. To be able to have a good installation experience and use an officially certified Linux distribution and not waste time trying to find the right libraries, I, of course, use Oracle Linux. Now, at least I can get a copy of Oracle Linux for free (even if I were not working for Oracle) and I can/could use that on as many servers at home (or at my company if I worked elsewhere) for testing, development and production. I just go to http://edelivery.oracle.com/linux and download the version(s) I want and off I go. Now, I also have the right (and not because I am an employee) to take those images and put them on my own server and give them to someone else; in fact, I just recently set up my own mirror on my own hosted server. I don't have to remove oracle-logos, I don't have to rebuild the ISO images, I don't have to recompile anything, I can just put the whole binary distribution on my own server without a contract. Perfectly free to do so. Of course the source code of all of this is there, I have a copy of the UEK code at home, just cloned from https://oss.oracle.com/git/?p=linux-2.6-unbreakable.git. And as you can see, the entire changelog, checkins, merges from Linus's tree, a complete overview of everything that got changed from kernel to kernel, from patch to patch, errata to errata. No obfuscating, no tarballs and time spent with diff, no reading bug reports to find out what changed (seems silly to me). Some of my servers are on the external network and I need to be current with security errata, but guess what, no problem, my servers are hooked up to http://public-yum.oracle.com which is open, free, and completely up to date, in a consistent, reliable way with any errata, security or bugfix. So I have nothing to worry about. Also, not because I am an employee. Anyone can. And, with this, I also can, and have, set up my own mirror site that hosts these RPMs, both binary and source. Because I am free to get them and distribute them. I am quite capable of supporting my servers on my own, so I don't need to rely on the support organization and I don't need to have a support subscription :-). So I don't need to pay. Neither would you, at least not with Oracle Linux. Another cool thing. The hosted servers came (unfortunately) with CentOS installed. While CentOS works just fine as is, I tend to prefer to be current with my security errata (reliably) and I prefer to maintain just one yum repository instead of 2, so I converted them over to Oracle Linux as well (in place) and they happily receive and use the exact same RPMs. Since Oracle Linux is exactly the same from a user/application point of view as RHEL, including files like /etc/redhat-release and with no changes from .el. to .centos., I know I have nothing to worry about when installing RHEL applications. So, OL everywhere makes my life a lot easier and why not... Next! Since I run Oracle VM and I have -tons- of VMs on my machines, in some cases on my big WOPR box I have 15-20 VMs running.
    Well, no problem, OL is free and I don't have to worry about counting the number of VMs, whether it's 1, or 4, or more than 10... like some other alternatives started doing... And finally :) I like to try out new stuff, not 3-year-old stuff. So with UEK2 as part of OL6 (and 6.3 in particular) I can play with a 3.0.x-based kernel and it just installs and runs perfectly clean with OL6, so quite current stuff in an environment that I know works, no need to toy around with an unsupported pre-alpha upstream distribution with libraries and versions that are not compatible with production software (I have nothing against Ubuntu or Fedora or openSUSE... just not what I can rely on or use for what I need, and I don't need a desktop). Pretty compelling, I say... And again, it doesn't matter that I work for Oracle; if I were working elsewhere, or not at all, all of the above would still apply. Student, teacher, developer, whatever. Contrast this with $349 per year for 2 sockets, one guest and self-support just to get the software bits.

    Read the article

  • How to fix Ubuntu 12.04.3 booting to a black screen full of errors in white text, after upgrading on a Dell Inspiron 1501

    - by Ibuntu
    I am running a Dell Inspiron 1501, and I use Linux only. No Microsoft or Apple operating systems (or really anything closed-source). I've only been using Linux for a little over a year but I'm starting to gain a comfortable level of familiarity with the system and terminology. I've been having some issues with Quantal Quetzal and Raring Ringtail, especially with older hardware, so I opted to install Ubuntu 12.04.3 Precise Pangolin on the Inspiron 1501. I checked my MD5 sum after downloading my ISO and all was good. I have in fact used this ISO/DVD to install Precise Pangolin successfully on a few other systems (some of which are even older than this laptop). The install goes fine. The wireless card doesn't work out of the box but this is a known issue which is fairly easy to fix. So, the first thing I did was open up a terminal and run sudo apt-get update && sudo apt-get upgrade which, part way through, crashed (I assume lightdm and possibly X) and took me to a black screen filled with white lines of text that were either errors or just the outputs of commands. The reason I say that is because I was unable to glean any useful information from the output on the screen. I did take a picture however and will post a link. After that, every time I boot the system it goes right to that black screen posting all the error messages or output in white text. I never get a purple Ubuntu splash, so from what I can tell after reading this wiki article: https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen that means that after the kernel is selected, it is unable to correctly implement the settings it needs. If the purple splash never shows, the frame buffer was never set correctly, right? This leads me to believe that it could be a kernel issue? The wiki suggested trying to pinpoint the issue by rolling back kernels until I find one that works. Is this my best option? I think I'm going to give it a try anyway and will let everyone know if I am able to solve the issue this way. I have since done a few reinstalls and some troubleshooting, including a couple of hours scouring the net for anyone with any kind of similar issue. Most of the issues I could find involved getting a black screen after login and none of them said anything about any information output on this black screen. My reinstalls have taught me that there is no issue updating, but as soon as I run sudo apt-get upgrade my system goes to the black screen, and every time I boot it up it does the same thing. The only way to fix it is by reinstalling. I never get any ability to log in. After a hard power-off of the laptop (because I cannot use ctrl+alt+del to reboot), when it boots again it goes to the GRUB boot menu and I can select between regular boot, recovery mode and the two memtest options. I never tried the memtest options but the other two both lead to the same black screen. Some people having a black/blank screen issue claim to have fixed it by using 12.10 or 13.04 but I believe they were having a different issue where they got a black/blank screen after logging in. I think I will still give these images a try, but mostly figured I would just wait another day or two for 13.10. Other things I figured I would try, from the following three articles: "After logging in, there's a black screen and my cursor, nothing else!
    in Ubuntu 12.10", "Black Screen on Login After Upgrading to 12.04", and "I can't get to the login screen", include opening a terminal using ctrl+alt+f1 and trying a variety of resets of Unity, X settings, and lightdm (or switching to gdm); but I doubt this will work or that I will even be able to access a terminal. I'm pretty sure the whole system is stuck after it loads the last line on the black screen. I will try these things and post more information when I have it. Hopefully someone has an idea in the meantime and I will keep checking back trying to find a solution. Thank you. Here are 3 different pictures of the error messages that I had to take with my phone: http://ubuntuone.com/album/0TBBkxmVajJIQQtoN9mVdN

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies but that won't be discussed here. There were a few challenges that arose during design and implementation: Naive algorithms had an unreasonable upper bound of computational cost. There was significant complexity associated with configurations where the member count varied significantly between physical machines. Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple). A custom PAS may need to solve several problems simultaneously, such as: Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions). Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded). Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue). Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions). Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure). Achieving the above goals while ensuring that partition movement is minimized. These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key take away from this is that algorithms will tend to either have exorbitant worst case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
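
    To make the cost argument concrete, here is a small, self-contained Java sketch of the brute-force approach described above: enumerate every permutation of a candidate assignment, score each one, and keep the best. The class name, scoring function and weights are invented purely for illustration; this is not the Coherence PartitionAssignmentStrategy API, and a real PAS would never enumerate permutations like this. Even this toy example performs N! scoring calls, which is exactly why the approach only works for tiny partition counts.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    /**
     * Toy sketch of the brute-force idea described above: score every permutation of a
     * candidate partition-to-member assignment and keep the best one. Names and weights
     * are invented for illustration only.
     */
    public class BruteForceAssignmentSketch {

        /** Lower is better: imbalance across members, plus the number of partitions that moved. */
        static int score(int[] candidate, int[] current, int memberCount) {
            int[] perMember = new int[memberCount];
            int moved = 0;
            for (int p = 0; p < candidate.length; p++) {
                perMember[candidate[p]]++;
                if (candidate[p] != current[p]) {
                    moved++;
                }
            }
            int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
            for (int count : perMember) {
                min = Math.min(min, count);
                max = Math.max(max, count);
            }
            return (max - min) * 10 + moved;
        }

        /** Generates every permutation of 'values': N! candidates, hence impractical beyond tiny N. */
        static void permute(int[] values, int from, List<int[]> out) {
            if (from == values.length) {
                out.add(values.clone());
                return;
            }
            for (int i = from; i < values.length; i++) {
                int tmp = values[from]; values[from] = values[i]; values[i] = tmp;
                permute(values, from + 1, out);
                tmp = values[from]; values[from] = values[i]; values[i] = tmp;
            }
        }

        public static void main(String[] args) {
            int memberCount = 3;
            int[] current   = {0, 0, 1, 1, 2, 2};   // partition -> member before rebalancing
            int[] candidate = {0, 1, 2, 0, 1, 2};   // proposed multiset of owners to permute

            List<int[]> candidates = new ArrayList<>();
            permute(candidate, 0, candidates);      // already 6! = 720 evaluations for 6 partitions

            int[] best = null;
            int bestScore = Integer.MAX_VALUE;
            for (int[] c : candidates) {
                int s = score(c, current, memberCount);
                if (s < bestScore) {
                    bestScore = s;
                    best = c;
                }
            }
            System.out.println("best score=" + bestScore + ", assignment=" + Arrays.toString(best));
        }
    }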

    Read the article

  • Custom page sizes in paging dropdown in Telerik RadGrid

    Working with Telerik RadControls for ASP.NET AJAX is actually quite easy and the initial effort to get started with the control suite is very low, meaning that you can easily get good results in little time. But there are usually cases where you have to go a little further and dig a little bit deeper than the standard scenarios. In this article I am going to describe how you can customize the default values (10, 20 and 50) of the drop-down list in the paging element of RadGrid. Get control over the displayed page sizes while using numeric paging... The default page sizes are good but not always good enough The paging feature in RadGrid offers you 3, well actually 4, possible page sizes in the drop-down element out of the box, which are 10, 20 or 50 items. You can get a fourth option by specifying a value different from the three standard ones for the PageSize attribute, e.g. 35 or 100. The drawback in that case is that it is the initial page size. Certainly, the available choices could be more flexible or even a little bit more intelligent, for example by taking the total count of records into consideration. There are some interesting scenarios that would justify a customized page size element: a low number of records, like 14 or similar, shouldn't offer a page size of 50; a high total count of records (e.g. 300+) should offer more choices, e.g. 100, 200, 500, or display of all records regardless of the number of records. I am sure that you might have your own requirements, and I hope that the following source code snippets might be helpful. Wiring the ItemCreated event In order to adjust and manipulate the existing RadComboBox in the paging element we have to handle the OnItemCreated event of RadGrid. Simply specify your code-behind method in the attribute of the RadGrid tag, like so: <telerik:RadGrid ID="RadGridLive" runat="server" AllowPaging="true" PageSize="20"    AllowSorting="true" AutoGenerateColumns="false" OnNeedDataSource="RadGridLive_NeedDataSource"    OnItemDataBound="RadGrid_ItemDataBound" OnItemCreated="RadGrid_ItemCreated">    <ClientSettings EnableRowHoverStyle="true">        <ClientEvents OnRowCreated="RowCreated" OnRowSelected="RowSelected" />        <Resizing AllowColumnResize="True" AllowRowResize="false" ResizeGridOnColumnResize="false"            ClipCellContentOnResize="true" EnableRealTimeResize="false" AllowResizeToFit="true" />        <Scrolling AllowScroll="true" ScrollHeight="360px" UseStaticHeaders="true" SaveScrollPosition="true" />        <Selecting AllowRowSelect="true" />    </ClientSettings>    <MasterTableView DataKeyNames="AdvertID">        <PagerStyle AlwaysVisible="true" Mode="NextPrevAndNumeric" />        <Columns>            <telerik:GridBoundColumn HeaderText="Listing ID" DataField="AdvertID" DataType="System.Int32"                SortExpression="AdvertID" UniqueName="AdvertID">                <HeaderStyle Width="66px" />            </telerik:GridBoundColumn>             <!--//  ... and some more columns ... -->         </Columns>    </MasterTableView></telerik:RadGrid> To provide a consistent experience for your visitors it might be helpful to always display the page size selection. This is done by setting the AlwaysVisible attribute of the PagerStyle element to true, as highlighted above.
    Customize the values of page size Your delegate method for the ItemCreated event should look like this: protected void RadGrid_ItemCreated(object sender, GridItemEventArgs e){    if (e.Item is GridPagerItem)    {        var dropDown = (RadComboBox)e.Item.FindControl("PageSizeComboBox");        var totalCount = ((GridPagerItem)e.Item).Paging.DataSourceCount;        var sizes = new Dictionary<string, string>() {            {"10", "10"},            {"20", "20"},            {"50", "50"}        };        if (totalCount > 100)        {            sizes.Add("100", "100");        }        if (totalCount > 200)        {            sizes.Add("200", "200");        }        sizes.Add("All", totalCount.ToString());        dropDown.Items.Clear();        foreach (var size in sizes)        {            var cboItem = new RadComboBoxItem() { Text = size.Key, Value = size.Value };            cboItem.Attributes.Add("ownerTableViewId", e.Item.OwnerTableView.ClientID);            dropDown.Items.Add(cboItem);        }        dropDown.FindItemByValue(e.Item.OwnerTableView.PageSize.ToString()).Selected = true;    }} It is important that we explicitly check the event arguments for GridPagerItem as it is the control that contains the PageSizeComboBox control that we want to manipulate. To keep the actual modification and exposure of possible page size values flexible, I am filling a Dictionary with the requested 'key/value'-pairs based on the number of total records displayed in the grid. As a final step, ensure that the previously selected value is the active one using the FindItemByValue() method. Of course, there might be different requirements, but I hope that the snippet above provides a first insight into customizing the page size values in Telerik's grid. The Grid demos describe a more advanced approach to customizing the Pager.

    Read the article

  • How do I put different textures on different walls? LWJGL

    - by lehermj
    So far I have it so you are running around in a box, but all of the walls are the same texture! I've loaded up other textures for the walls (I want the walls a different texture than the floor) but it seems as if its being ignored... Here's my code: int floorTexture = glGenTextures(); { InputStream in = null; try { in = new FileInputStream("floor.png"); PNGDecoder decoder = new PNGDecoder(in); ByteBuffer buffer = BufferUtils.createByteBuffer(4 * decoder.getWidth() * decoder.getHeight()); decoder.decode(buffer, decoder.getWidth() * 4, Format.RGBA); buffer.flip(); glBindTexture(GL_TEXTURE_2D, floorTexture); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, decoder.getWidth(), decoder.getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer); glBindTexture(GL_TEXTURE_2D, floorTexture); } catch (FileNotFoundException ex) { System.err.println("Failed to find the texture files."); ex.printStackTrace(); Display.destroy(); System.exit(1); } catch (IOException ex) { System.err.println("Failed to load the texture files."); ex.printStackTrace(); Display.destroy(); System.exit(1); } finally { if (in != null) { try { in.close(); } catch (IOException e) { e.printStackTrace(); } } } } int wallTexture = glGenTextures(); { InputStream in = null; try { in = new FileInputStream("walls.png"); PNGDecoder decoder = new PNGDecoder(in); ByteBuffer buffer = BufferUtils.createByteBuffer(4 * decoder.getWidth() * decoder.getHeight()); decoder.decode(buffer, decoder.getWidth() * 4, Format.RGBA); buffer.flip(); glBindTexture(GL_TEXTURE_2D, wallTexture); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, decoder.getWidth(), decoder.getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer); glBindTexture(GL_TEXTURE_2D, wallTexture); } catch (FileNotFoundException ex) { System.err.println("Failed to find the texture files."); ex.printStackTrace(); Display.destroy(); System.exit(1); } catch (IOException ex) { System.err.println("Failed to load the texture files."); ex.printStackTrace(); Display.destroy(); System.exit(1); } finally { if (in != null) { try { in.close(); } catch (IOException e) { e.printStackTrace(); } } } } int ceilingDisplayList = glGenLists(1); glNewList(ceilingDisplayList, GL_COMPILE); glBegin(GL_QUADS); glTexCoord2f(0, 0); glVertex3f(-gridSize, ceilingHeight, -gridSize); glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(gridSize, ceilingHeight, -gridSize); glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * tileSize); glVertex3f(gridSize, ceilingHeight, gridSize); glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(-gridSize, ceilingHeight, gridSize); glEnd(); glEndList(); int wallDisplayList = glGenLists(1); glNewList(wallDisplayList, GL_COMPILE); glBegin(GL_QUADS); // North wall glTexCoord2f(0, 0); glVertex3f(-gridSize, floorHeight, -gridSize); glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(gridSize, floorHeight, -gridSize); glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * tileSize); glVertex3f(gridSize, ceilingHeight, -gridSize); glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(-gridSize, ceilingHeight, -gridSize); // West wall glTexCoord2f(0, 0); glVertex3f(-gridSize, floorHeight, -gridSize); glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(-gridSize, ceilingHeight, -gridSize); glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * 
tileSize); glVertex3f(-gridSize, ceilingHeight, +gridSize); glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(-gridSize, floorHeight, +gridSize); // East wall glTexCoord2f(0, 0); glVertex3f(+gridSize, floorHeight, -gridSize); glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(+gridSize, floorHeight, +gridSize); glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * tileSize); glVertex3f(+gridSize, ceilingHeight, +gridSize); glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(+gridSize, ceilingHeight, -gridSize); // South wall glTexCoord2f(0, 0); glVertex3f(-gridSize, floorHeight, +gridSize); glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(-gridSize, ceilingHeight, +gridSize); glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * tileSize); glVertex3f(+gridSize, ceilingHeight, +gridSize); glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(+gridSize, floorHeight, +gridSize); glEnd(); glEndList(); int floorDisplayList = glGenLists(1); glNewList(floorDisplayList, GL_COMPILE); glBegin(GL_QUADS); glTexCoord2f(0, 0); glVertex3f(-gridSize, floorHeight, -gridSize); glTexCoord2f(0, gridSize * 10 * tileSize); glVertex3f(-gridSize, floorHeight, gridSize); glTexCoord2f(gridSize * 10 * tileSize, gridSize * 10 * tileSize); glVertex3f(gridSize, floorHeight, gridSize); glTexCoord2f(gridSize * 10 * tileSize, 0); glVertex3f(gridSize, floorHeight, -gridSize); glEnd(); glEndList();
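
    This isn't part of the original question, but one common reason every surface ends up with the same texture is that only the texture bound at draw time applies, since the glBindTexture calls above happen outside the display list compilation. A minimal sketch of how the two textures created above might be used, binding each one immediately before calling its display list in the render loop (the render-loop placement and the ceiling's texture choice are assumptions, as that part of the code isn't shown):

    // Hypothetical render-loop excerpt; assumes the texture ids and display lists built
    // above are in scope and the usual GL11 static imports are in place.
    glEnable(GL_TEXTURE_2D);

    glBindTexture(GL_TEXTURE_2D, floorTexture);    // floor.png for the floor
    glCallList(floorDisplayList);

    glBindTexture(GL_TEXTURE_2D, wallTexture);     // walls.png for the four walls
    glCallList(wallDisplayList);

    glBindTexture(GL_TEXTURE_2D, floorTexture);    // assumption: the ceiling reuses the floor texture
    glCallList(ceilingDisplayList);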

    Read the article

  • Windows 8/Surface Launch Event Summary

    - by Tim Murphy
    Today was a big day for Microsoft with two separate launch events.  The first was for Windows 8 and all of its hardware partners.  The second was specifically to introduce the Microsoft Windows 8 Surface tablet.  Below are some of the takeaways I got from the webcasts. Windows 8 Launch The three general areas that Microsoft focused on were the release of the OS itself, the public unveiling of the Windows Store and the new devices available from its hardware partners. The release of the OS focused on the fact that it will be available at midnight tonight for both new PCs and for upgrades.  I can’t say that this interested me that much since it was already known to most people.  I think what they did show well was how easy the OS really is to use. The Windows Store is also not a new feature to those of us who have been running the pre-release versions of Windows 8 or have owned Windows Phone 7 for the past 2 years.  What was interesting is that the Windows Store launches with more apps available than any other platform's store had at its respective launch.  I think this says a lot about how Microsoft focuses on the ability of developers to create software and make it available.  They of course were sure to emphasize that the Windows Store has better monetary terms for developers than its competitors. They also showed off the fact that Xbox Music streaming is available to all Windows 8 users for free.  Couple this with the Bing suite of apps that give you news, weather, sports and finance right out of the box and I think most people will find the environment a joy to use. I think the hardware demo, while quick and furious, really showed where Windows shines: CHOICE!  They made a statement that over 1000 devices have been certified for Windows 8.  They showed tablets, laptops, desktops, all-in-ones and convertibles.  Since these devices have industry-standard connectors they give you a much wider variety of accessories and devices that you can use with them. Steve Ballmer then came on stage and tried to see how many times he could use the word “magical”.  He focused on how the Windows 8 OS is designed to integrate with SkyDrive, Skype and Outlook.com.  He also reinforced that they think Windows 8 is the best choice for the Enterprise when it comes to protecting data and integrating across devices including Windows Phone 8. With that we were left to wait for the second event of the day. Surface Launch The second event of the day started with kids with magnets.  Ok, they were adults, but who doesn’t like playing with magnets?  Steven Sinofsky detached and reattached the Surface keyboard repeatedly, clearly enjoying himself.  It turns out that there are 4 magnets in the cover, 2 for alignment and 2 as connectors. They then went on to give us the details on the display.  The 10.6” display is optically bonded to the case and is optimized to reduce glare.  I think this came through very well in the demonstrations. The properties of the case were also a great selling point.  The VaporMg casing allowed them to drop the device on stage, on purpose, and continue working.  Of course they had to bring out the skateboards made from Surface devices. “It just has to feel right” was the reason they gave for many of their design decisions, from the weight and size of the device to the way the kickstand and camera work together.  While this gave you the feeling that the whole process was trial and error, you could tell that a lot of science went into the specs.
    This included making sure that the magnets were strong enough to hold the cover on and still let a 3-year-old remove the cover without effort. I am glad that they also decided that a USB port would be part of the spec since it gives so many options.  They made the point that this allows Surface to leverage over 420 million existing devices.  That works for me. The last feature that I really thought was important was the microSD port.  Being stuck with the onboard memory has been an aggravation of mine with many of the devices on the market today. I think they did a good job of really getting the audience to understand why you would want this platform and this particular device.  They used personal examples like creating a video of a birthday party and being in it, or the fact that the device was being used to live-blog the event and control the lights and presentation.  They showed very well that it was not only fun but very capable of getting real work done.  Handing out tablets to the crowd didn’t hurt either.  In the end I really wanted a Surface even though I really have no need for one on a daily basis.  Great job Microsoft! del.icio.us Tags: Windows 8,Win8,Windows 8 Launch

    Read the article

  • IE9, LightSwitch Beta 2 and Zune HD: A Study in Risk Management?

    - by andrewbrust
    Photo by parl, 'Risk.’ Under Creative Commons Attribution-NonCommercial-NoDerivs License This has been a busy week for Microsoft, and for me as well.  On Monday, Microsoft launched Internet Explorer 9 at South by Southwest (SXSW) in Austin, TX.  That evening I flew from New York to Seattle.  On Tuesday morning, Microsoft launched Visual Studio LightSwitch, Beta 2 with a Go-Live license, in Redmond, and I had the privilege of speaking at the keynote presentation where the announcement was made.  Readers of this blog know I‘m a fan of LightSwitch, so I was happy to tell the app dev tools partners in the audience that I thought the LightSwitch extensions ecosystem represented a big opportunity – comparable to the opportunity when Visual Basic 1.0 was entering its final beta roughly 20 years ago.  On Tuesday evening, I flew back to New York (and wrote most of this post in-flight). Two busy, productive days.  But there was a caveat that impacts the accomplishments, because Monday was also the day reports surfaced from credible news agencies that Microsoft was discontinuing its dedicated Zune hardware efforts.  While the Zune brand, technology and service will continue to be a component of Windows Phone and a piece of the Xbox puzzle as well, speculation is that Microsoft will no longer be going toe-to-toe with iPod touch in the portable music player market. If we take all three of these developments together (even if one of them is based on speculation), two interesting conclusions can reasonably be drawn, one good and one less so. Microsoft is doubling down on technologies it finds strategic and de-emphasizing those that it does not.  HTML 5 and the Web are strategic, so here comes IE9, and it’s a very good browser.  Try it and see.  Silverlight is strategic too, as is SQL Server, Windows Azure and SQL Azure, so here comes Visual Studio LightSwitch Beta 2 and a license to deploy its apps to production.  Downloads of that product have exceeded Microsoft’s projections by more than 50%, and the company is even citing analyst firms’ figures covering the number of power-user developers that might use it. (I happen to think the product will be used by full-fledged developers as well, but that’s a separate discussion.) Windows Phone is strategic too…I wasn’t 100% positive of that before, but the Nokia agreement has made me confident.  Xbox as an entertainment appliance is also strategic.  Standalone music players are not strategic – and even if they were, selling them has been a losing battle for Microsoft.  So if Microsoft has consolidated the Zune content story and the ZunePass subscription into Xbox and Windows Phone, it would make sense, and would be a smart allocation of resources.  Essentially, it would be for the greater good. But it’s not all good.  In this scenario, Zune player customers would lose out.  Unless they wanted to switch to Windows Phone, and then use their phone’s battery for the portable media needs, they’re going to need a new platform.  They’re going to feel abandoned.  Even if Zune lives, there have been other such cul de sacs for customers.  Remember SPOT watches?  Live Spaces?  The original Live Mesh?  Microsoft discontinued each of these products.  The company is to be commended for cutting its losses, as admitting a loss isn’t easy.  But Redmond won’t be well-regarded by the victims of those decisions.  Instead, it gets black marks. What’s the answer?  
I think it’s a bit like the 1980’s New York City “don’t block the box” gridlock rules: don’t enter an intersection unless you see a clear path through it.  If the light turns red and you’re blocking the perpendicular traffic, that’s your fault in judgment.  You get fined and get points on your license and you don’t get to shrug it off as beyond your control.  Accountability is key.  The same goes for Microsoft.  If it decides to enter a market, it should see a reasonable path through success in that market. Switching analogies, Microsoft shouldn’t make investments haphazardly, and it certainly shouldn’t ask investors to buy into a high-risk fund that is sold as safe and which offers only moderate returns.  People won’t continue to invest with a fund manager with a track record of over-zealous, imprudent, sub-prime investments.  The same is true on the product side for Microsoft, and not just with music players and geeky wrist watches.  It’s true of Web browsers, and line-of-business app dev tools, and smartphones, and cloud platforms and operating systems too.  When Microsoft is casual about its own risk, it raises risk for its customers, and weakens its reputation, market share and credibility.  That doesn’t mean all risk is bad, but it does mean no product team’s risk should be taken lightly. For mutual fund companies, it’s the CEO’s job to give his fund managers autonomy, but to make sure they’re conforming to a standard of rational risk management.  Because all those funds carry the same brand, and many of them serve the same investors. The same goes for Microsoft, its product portfolio, its executive ranks and its product managers.

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I’ll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs. Configuring the Data Nodes The configuration of data node threads can be managed in two ways via the config.ini file: - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM. - Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to. The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64 – 80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets and dense multi-core processors. 24 Threads and Beyond! An example of how to make best use of a 24 CPU/thread server box is to configure the following: - 8 ldm threads - 4 tc threads - 3 recv threads - 3 send threads - 1 rep thread for asynchronous replication. Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, and so it is recommended to separate activity across CPUs to ensure conflicts with the MySQL Cluster threads are eliminated. When booting a Linux kernel it is also possible to provide an option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task. Only by using CPU affinity syscalls can a process be made to run on those CPUs. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning CPUs IRQ processing on these tasks, a very stable performance environment is created for a MySQL Cluster data node. On a 32 CPU/Thread server: - Increase the number of ldm threads to 12 - Increase tc threads to 6 - Provide 2 more CPUs for the OS and interrupts. - The number of send and receive threads should, in most cases, still be sufficient. On a 40 CPU/Thread server, increase ldm threads to 16, tc threads to 8 and increase send and receive threads to 4.
    On a 48 CPU/Thread server it is possible to optimize further by using: - 12 tc threads - 2 more CPUs for the OS and interrupts - Avoid using IO threads and main thread on the same CPU - Add 1 more receive thread. Summary As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices. You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don’t forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
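
    As a rough illustration only (the exact ThreadConfig grammar and the thread types supported should be verified against the MySQL Cluster reference manual for your release, and the CPU numbers below are placeholders for a particular host), the 24 CPU/thread layout described above might be expressed in config.ini along these lines:

    [ndbd default]
    # Option 1: let the data node divide work across threads itself
    MaxNoOfExecutionThreads=24

    # Option 2 (use instead of Option 1): explicit per-type thread counts and CPU binding,
    # roughly matching the 8 ldm / 4 tc / 3 recv / 3 send / 1 rep layout described in the post
    ThreadConfig=ldm={count=8,cpubind=1,2,3,4,5,6,7,8},tc={count=4,cpubind=9,10,11,12},recv={count=3,cpubind=13,14,15},send={count=3,cpubind=16,17,18},rep={count=1,cpubind=19}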

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010.  Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source.  I'm not going to take the easy way and tie to a SQL Server data source, though; I will tie it to an XML data file instead.  Why?  Well, why not?  This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way.  Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code.  To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article.  The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played: Note that when making the associations between these entities, I was given the option to create the foreign key but I only chose to select this option for the association between Player and Position.  The reason for this is that I am picturing the XML that will contain this data to look somewhat like this: <MFL> <Position/> <Position/> <Position/> <Team>     <Player>         <Statistic/>     </Player> </Team> </MFL> Statistic will be under its associated Player node, and Player will be under its associated Team node; no need to have an Id to reference it if we know it will always fall under its parent.  Position, however, is more of a lookup value that will not have any hierarchical relationship to the player.  In fact, the Position data itself may be in a completely different XML file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id. So now that we have a simple data model laid out, I would like to generate two things based on it:  a class for each entity with properties corresponding to each entity property, and an IO class with methods to get data for each entity, either all instances, by Id or by parent. Now my experience with code generation in the past has consisted of writing up little apps that use the CodeDOM directly to regenerate code on demand (or using tools like CodeSmith).  Surely, there has got to be a more fun way to do this given that we are using the Entity Framework, which already has built-in code generation for SQL Server support.  Let's start with that built-in stuff to give us a base to work off of.  Right-click anywhere in the canvas of our model and select Add Code Generation Item: So just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more.  I mean a TON more.  Way too much complicated code was generated; now, that code is likely to be a black box anyway so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (tt) file. When I open the .tt file that was generated, right off the bat I realize there is going to be trouble: there is no color coding, no IntelliSense, no nothing!
    That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog.  Thankfully, the community comes to my rescue and I won't have to cast my mind back to the glory days of coding in VI (look it up, kids).  Using the Extension Manager (available under the Tools menu), I did a quick search for "tt editor" in the Online Gallery and quickly found the Tangible T4 Editor: Downloading and installing this was a breeze, and after doing so I got some color coding and IntelliSense while editing the tt files.  If you will be doing any customizing of tt files, I highly recommend installing this extension.  Next, we'll see if that is enough help for us to tweak that tt file to do the kind of code generation that we want.

    Read the article

  • Simple BizTalk Orchestration & Port Tutorial

    - by bosuch
    (This is a reference for a lunch & learn I'm giving at my company) This demo will create a BizTalk process that monitors a directory for an XML file, loads it into an orchestration, and drops it into a different directory. There’s no real processing going on (other than moving the file from one location to another), but this will introduce you to Messages, Orchestrations and Ports. To begin, create a new BizTalk Project names OrchestrationPortDemo: When the solution has been created, right-click the OrchestrationPortDemo solution name and select Add -> New Item. Add a BizTalk Orchestration named DemoOrchestration: Click Add and the orchestration will be created and displayed in the BizTalk Orchestration Designer. The designer allows you to visually create your business processes: Next, you will add a message (the basic unit of communication) to the orchestration. In the Orchestration View, right-click Messages and select New Message. In the message properties window, enter DemoMessage as the Identifier (the name), and select .NET Classes -> System.Xml.XmlDocument for Message Type. This indicates that we’ll be passing a standard Xml document in and out of the orchestration. Next, you will add Send and Receive shapes to the orchestration. From the toolbox, drag a Receive shape onto the orchestration (where it says “Drop a shape from the toolbox here”). Next, drag a Send shape directly below the Receive shape. For the properties of both shapes, select DemoMessage for Message – this indicates we’ll be passing around the message we created earlier. The Operation box will have a red exclamation mark next to it because no port has been specified. We will do this in a minute. On the Receive shape properties, you must be sure to select True for Activate. This indicates that the orchestration will be started upon receipt of a message, rather than being called by another orchestration. If you leave it set to false, when you try to build the application you’ll receive the error “You must specify at least one already-initialized correlation set for a non-activation receive that is on a non self-correlating port.” Now you’ll add ports to the orchestration. Ports specify how your orchestration will send and receive messages. Drag a port from the toolbox to the left-hand Port Surface, and the Port Configuration Wizard launches. For the first port (the receive port), enter the following information: Name: ReceivePort Select the port type to be used for this port: Create a new Port Type Port Type Name: ReceivePortType Port direction of communication: I’ll always be receiving <…> Port binding: Specify later By choosing “Specify later” you are choosing to bind the port (choose where and how it will send or receive its messages) at deployment time via the BizTalk Server Administration console. This allows you to change locations later without building and re-deploying the application. Next, drag a port to the right-hand Port Surface; this will be your send port. Configure it as follows: Name: SendPort Select the port type to be used for this port: Create a new Port Type Port Type Name: SendPortType Port direction of communication: I’ll always be sending <…> Port binding: Specify later Finally, drag the green arrow on the ReceivePort to the Receive_1 shape, and the green arrow on the SendPort to the Send_1 shape. Your orchestration should look like this: Now you have a couple final steps before building and deploying the application. 
In the Solution Explorer, right-click on OrchestrationPortDemo and select Properties. On the Signing tab, click “Sign the assembly”, and choose <New…> from the drop-down. Enter DemoKey as the Key file name, and deselect “Protect my key file with a password”. This will create the file DemoKey.snk in your solution. Signing the assembly gives it a strong name so that it can be deployed into the global assembly cache (GAC). Next, click the Deployment tab, and enter OrchestrationPortDemo as the Application Name. Save your solution. Click “Build OrchestrationPortDemo”. Your solution should (hopefully!) build with no errors. Click “Deploy OrchestrationPortDemo”. (Note – If you’re running Server 2008, Vista or Win7, you may get an error message. If so, close Visual Studio and run it as an administrator) That’s it! Your application is ready to be configured and fired up in the BizTalk Server Administration console, so stay tuned!

    Read the article

  • BIP 11g Dynamic SQL

    - by Tim Dexter
    Back in the 10g release, if you wanted something beyond the standard query for your report extract, you needed to break out your favorite text editor. You gotta love 'vi' and hate emacs, am I right? And get to building a data template; they were/are lovely to write, such fun ... not! It's not fun writing them by hand, but you do get to do some cool stuff around the data extract, including dynamic SQL. By that I mean the ability to add content dynamically to your query at runtime. With 11g, we spoiled you with a visual builder, no more vi or notepad sessions, a friendly drag and drop interface allowing you to build hierarchical data sets, calculated columns, summary columns, etc. You can still create the dynamic SQL statements; it's not so well documented right now, so in lieu of doc updates here's the skinny. If you check out the 10g process to create dynamic SQL in the docs, you'll see you need to create a data trigger function where you assign the dynamic SQL to a global variable that's matched in your report SQL. In 11g, the process is really the same; BI Publisher just provides a bit more help to define what trigger code needs to be called. You still need to create the function and place it inside a package in the db. Here's a simple PL/SQL package with the 'beforedata' function trigger. Spec create or replace PACKAGE BIREPORTS AS whereCols varchar2(2000); FUNCTION beforeReportTrig return boolean; end BIREPORTS; Body create or replace PACKAGE BODY BIREPORTS AS   FUNCTION beforeReportTrig return boolean AS   BEGIN       whereCols := ' and d.department_id = 100';     RETURN true;   END beforeReportTrig; END BIREPORTS; You'll notice the additional where clause (whereCols, declared as a public variable) is hard-coded. I'll cover parameterizing that in my next post. If you cannot wait, check the 10g docs for an example. I have my package compiling successfully in the db. Now, onto the BIP data model definition. 1. Create a new data model and go ahead and create your query(s) as you would normally. 2. In the query dialog box, add in the variables you want replaced at runtime using an ampersand rather than a colon, e.g. &whereCols.   select     d.DEPARTMENT_NAME, ...  from    "OE"."EMPLOYEES" e,     "OE"."DEPARTMENTS" d  where   d."DEPARTMENT_ID"= e."DEPARTMENT_ID" &whereCols   Note that 'whereCols' matches the global variable name in our package. When you click OK to clear the dialog, you'll be asked for a default value for the variable; just use ' and 1=1'. That leading space is important to keep the SQL valid, i.e. required whitespace. This value will be used for the where clause in case it's not set by the function code. 3. Now click on the Event Triggers tree node and create a new trigger of the type Before Data. Type in the default package name, in my example, 'BIREPORTS'. Then hit the update button to get BIP to fetch the valid functions. In my case I get to see the following: Select the BEFOREREPORTTRIG function (or your name) and shuttle it across. 4. Save your data model and now test it. For now, you can update the where clause via the PL/SQL package. Next time ... parametrizing the dynamic clause.

    Read the article

  • Flat File Connection Manager in SSIS package shows "Valid File Name Must be Selected"

    - by Traples
    (Flat File Location) Samba Share | Windows Share (SSIS) _______________________________ | | XP 32bit | Works | Works | | 2003 Serv 32bit | Works | Works | | Vista 64bit | ERROR | Works | | Win 7 64bit | ERROR | Works | | 2008 Serv 64bit | ERROR | Works I created an SSIS package in VS 2008 that parses a flat file from a shared folder and puts the records into a SQL Server db. I recently installed Windows 7 and VS 2008 on a new workstation. When I import the package from TFS and open it, I get the error Validation error. Parse and Import Catalog Flat File: MySSISPackage: The file name "\\shared\flatfile.txt" specified in the connection was not valid. When I open the Flat File Connection Manager Editor, there is an error stating: A valid file name must be selected I can browse to and select the file from inside the editor, but I cannot change any properties, or move away from the General tab because of this error. If I go back to my laptop (Windows XP), where the package was first created, there is no error. Both workstations are on the same domain, and I'm logging in using the same credentials. Any ideas as to why I would receive this error from one workstation and not another? UPDATE: If I take the .dtsx package from the running workstation and load it into SSIS on the server, I get the following errors when it tries to run: Error: The file name "\\shared\flatfile.txt" specified in the connection was not valid. and... Error: Connection "MySSISPackage" failed validation. and... Error: The file name property is not valid. The file name is a device or contains invalid characters. UPDATE 2: a) The Shared folder I'm trying to pull the flat file from is a Samba share on a Unix box. b) If I access the file using SSIS on any 64-bit platform (Windows 7 64-bit, Vista 64-bit, Windows Server 2008) I get the error "A valid file name must be selected." c) Accessing the file using SSIS from 32-bit environments (Windows XP 32-bit, Windows Server 2003 32-bit) there is no problem. d) If I move the file to a shared folder on a Windows server, 64-bit SSIS recognizes the file.

    Read the article

  • How to send email from an EC2 instance using GoDaddy's SMTP server?

    - by Matt Greer
    SMTP is a whole new ballgame for me, but I am reading up on it. I am attempting to send email from my EC2 instance using GoDaddy's SMTP server. My domain name is registered through GoDaddy and I have 2 email accounts with them. I can successfully send the email from my dev box no problem. my web.config <system.net> <mailSettings> <smtp from="[email protected]" deliveryMethod="Network"> <network host="smtpout.secureserver.net" clientDomain="mydomain.com" port="25" userName="[email protected]" password="mypassword" defaultCredentials="false" /> </smtp> </mailSettings> </system.net> In my ASP.NET app: MailMessage mailMessage = new MailMessage("[email protected]", recipientEmail, emailSubject, body); mailMessage.IsBodyHtml = false; SmtpClient mailClient = new SmtpClient(); mailClient.Send(mailMessage); Very typical, simple use of System.Net.Mail.SmtpClient. The mail client is picking up the settings from my web.config as expected. From the EC2 instance, the same setup yields: System.Net.Mail.SmtpException: Failure sending mail. ---> System.IO.IOException: Unable to read data from the transport connection: net_io_connectionclosed. at System.Net.Mail.SmtpReplyReaderFactory.ProcessRead(Byte[] buffer, Int32 offset, Int32 read, Boolean readLine) at System.Net.Mail.SmtpReplyReaderFactory.ReadLines(SmtpReplyReader caller, Boolean oneLine) at System.Net.Mail.SmtpReplyReaderFactory.ReadLine(SmtpReplyReader caller) at System.Net.Mail.SmtpConnection.GetConnection(ServicePoint servicePoint) at System.Net.Mail.SmtpClient.Send(MailMessage message) --- End of inner exception stack trace --- I have searched high and low and not found anyone else attempting this. All GoDaddy smtp situations I have found involve people being hosted by GoDaddy using their relay server. Some more info: My EC2 instance is Windows Server 2008 with IIS 7. The app is running in .NET 4 I can successfully use Gmail's SMTP server on the EC2 instance by using their port, setting SmtpClient.EnableSsl to true, and sending the mail through a gmail account. But we want to send the email from an account on our domain. I have port 25 open on both the Windows firewall and Amazon's Security group based firewall. I have played with Wireshark and noticed my SMTP related traffic was talking to ports in the 5,000s, so out of desperation I opened them all up to no avail (then closed them back down) As far as I know my EC2 instance's IP address is not black listed by GoDaddy. I have a feeling I'm just missing something fundamental. I also have a feeling someone is going to recommend I use AuthSmtp or something similar, I'll agree, and have had wasted the past 6 hours :)

    Read the article

  • Attempting to update multiple partial views from a single Ajax.ActionLink

    - by mwright
    I have a a partial view which contains other partial views. I am trying to the main partial view ( "MainPartialView" ) from an Ajax.ActionLink in a partial view contained by the main partial view ( "DetailsView" ). Everything appears to be called just fine and I can step through and it executes all of the code on the pages. However, after that is all done it throws this error in a popup box in visual studio: htmlfile: Unknown runtime error This error puts the break point in the MicrosoftAjax.js file, Line 5, Col 83,632, Ch 83632. Any thoughts? Index Page: <script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script> <script src="../../Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script> <ul> <% foreach (DomainObject domainObject in Model) { %> <% Html.RenderPartial("MainPartialView", domainObject); %> <% } %> </ul> MainPartialView: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<DomainObject>" %> <li> <div id="<%= Model.Id%>"> <%= Ajax.ActionLink("Details", "PartialViewAction", "PartialViewController", new { id = Model.Id, }, new AjaxOptions { UpdateTargetId ="UpdateTargetId" })%> <% Html.RenderPartial("Details", Model); %> <div id="Details"></div> </div> </li> Details: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<DomainObject>" %> <% foreach ( var link in Model.Links) {%> <div> <div> <%= link.Name %> </div> <div> <%= Ajax.ActionLink("Submit this Action", "DoAction", "XTrademark", new { id = Model.TrademarkId, id2 = actionStateLink.ActionStateLinkId }, new AjaxOptions{ UpdateTargetId = Model.TrademarkId.ToString()} )%> </div> </div> <br /> <%} %>

    Read the article

  • MSDTC - Communication with the underlying transaction manager has failed (Firewall open, MSDTC netwo

    - by SocialAddict
    I'm having problems with my ASP.NET web forms system. It worked on our test server but now we are putting it live one of the servers is within a DMZ and the SQL server is outside of that (on our network still though - although a different subnet) I have open up the firewall completely between these two boxes to see if that was the issue and it still gives the error message "Communication with the underlying transaction manager has failed" whenever we try and use the "TransactionScope". We can access the data for retrieval it's just transactions that break it. We have also used msdtc ping to test the connection and with the amendments on the firewall that pings successfully, but the same error occurs! How do i resolve this error? Any help would be great as we have a system to go live today. Panic :) Edit: I have created a more straightforward test page with a transaction as below and this works fine. Could a nested transaction cause this kind of error and if so why would this only cause an issue when using a live box in a dmz with a firewall? AuditRepository auditRepository = new AuditRepository(); try { using (TransactionScope scope = new TransactionScope()) { auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#1", 1); auditRepository.Save(); auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#2", 1); auditRepository.Save(); scope.Complete(); } } catch (Exception ex) { Response.Write("Test Error For Transaction: " + ex.Message + "<br />" + ex.StackTrace); } This is the ErrorStack we are getting when the problem occurs: at System.Transactions.TransactionInterop.GetOletxTransactionFromTransmitterPropigationToken(Byte[] propagationToken) at System.Transactions.TransactionStatePSPEOperation.PSPEPromote(InternalTransaction tx) at System.Transactions.TransactionStateDelegatedBase.EnterState(InternalTransaction tx) at System.Transactions.EnlistableStates.Promote(InternalTransaction tx) at System.Transactions.Transaction.Promote() at System.Transactions.TransactionInterop.ConvertToOletxTransaction(Transaction transaction) at System.Transactions.TransactionInterop.GetExportCookie(Transaction transaction, Byte[] whereabouts) at System.Data.SqlClient.SqlInternalConnection.GetTransactionCookie(Transaction transaction, Byte[] whereAbouts) at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx) at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx) at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction) at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction) at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.SqlClient.SqlConnection.Open() at System.Data.Linq.SqlClient.SqlConnectionManager.UseConnection(IConnectionUser user) at System.Data.Linq.SqlClient.SqlProvider.get_IsSqlCe() at System.Data.Linq.SqlClient.SqlProvider.InitializeProviderMode() at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) at System.Data.Linq.ChangeDirector.StandardChangeDirector.DynamicInsert(TrackedObject item) at System.Data.Linq.ChangeDirector.StandardChangeDirector.Insert(TrackedObject item) at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) at 
System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges() at RegBook.classes.DbBase.Save() at RegBook.usercontrols.BookingProcess.confirmBookingButton_Click(Object sender, EventArgs e)
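    Reading the stack trace, the failure happens at the point the transaction is promoted (PSPEPromote / GetExportCookie): the scope starts as a lightweight SQL transaction and is escalated to a distributed one, and it is that escalation that needs MSDTC RPC traffic between the DMZ box and the SQL box. Promotion is usually triggered by a second connection being opened inside the scope - for example each repository call opening and closing its own connection - which would explain why the simple single-repository test page works. A hedged sketch of the pattern that avoids promotion on SQL Server 2005 or later (the table, connection string and shape of the data access are invented for illustration, not taken from the post):

    using System;
    using System.Data.SqlClient;
    using System.Transactions;

    class SingleConnectionScope
    {
        static void Main()
        {
            using (var scope = new TransactionScope())
            using (var connection = new SqlConnection(
                "Data Source=dbserver;Initial Catalog=Audit;Integrated Security=True"))
            {
                connection.Open();   // every command below enlists in this one connection

                using (var cmd = new SqlCommand("INSERT INTO Audit (Message) VALUES (@m)", connection))
                {
                    cmd.Parameters.AddWithValue("@m", "TEST-TRANSACTIONS#1");
                    cmd.ExecuteNonQuery();

                    cmd.Parameters["@m"].Value = "TEST-TRANSACTIONS#2";
                    cmd.ExecuteNonQuery();
                }

                scope.Complete();    // commits locally; no escalation, no MSDTC, no firewall hop
            }
        }
    }

    If the repositories each create their own DataContext, passing one open connection (or one shared DataContext) into both keeps the scope on a single connection; otherwise the firewall between the subnets has to pass the full MSDTC/RPC traffic (TCP 135 plus the dynamic RPC ports), not just SQL's own port.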

    Read the article

  • Silverlight and Unexpected Font Sizes

    - by Eric J.
    Someone please teach me to fish here... I'm just learning Silverlight and have ran into a few situations where the font size actually used is drastically different than I would expect. There's probably something conceptual that I'm missing. Case A In one instance, I have defined a user control that presents a Label to show text. If one clicks on the label, the label (that is in a stack panel, in the user control) is replaced with a TextBox. When used at the top of a page (as in the example below with lblName) the label text is very small (around 8 points). When clicked on, the text box that replaces the label uses the specified fonts size. That same user control, used in different parts of the app, uses the same font for Label and TextBox. <Grid x:Name="LayoutRoot" Background="White"> <Grid.RowDefinitions> <RowDefinition Height="33" /> <RowDefinition Height="267*" /> </Grid.RowDefinitions> <StackPanel Height="Auto" HorizontalAlignment="Left" Name="stackPanel" VerticalAlignment="Top" Width="Auto" Grid.Row="1" /> <my:EditLabel Height="33" HorizontalAlignment="Left" x:Name="lblName" VerticalAlignment="Top" Width="Auto" FlexText="{Binding Name, Mode=TwoWay}" FontSize="20" MinHeight="24" /> </Grid> Case B I'm using the LiquidMenu.Menu control to pop up a menu when a button is pressed. The font looks huge compared to the rest of my page (maybe 36 points?). I tried forcing it to a very small by explicitly setting it to 8pt, but that had no effect. <Grid x:Name="LayoutRoot" Background="{x:Null}"> <StackPanel x:Name="labelStackPanel" Orientation="Horizontal"> <TextBlock Height="24" HorizontalAlignment="Left" Name="labelText" VerticalAlignment="Top" Width="200" Text="(Value Goes Here)" /> </StackPanel> <liquidMenu:Menu x:Name="popupMenu" Canvas.Left="40" Canvas.Top="40" ItemSelected="MenuList_ItemSelected" Visibility="Collapsed" Height="Auto" FontSize="8"> <liquidMenu:MenuItem ID="delete" Icon="Images/Delete10.png" Text="Delete" Shortcut="Del" /> <liquidMenu:MenuItem ID="exclusive" Icon="" Text="Exclusive" Shortcut="Ctrl+E" /> <liquidMenu:MenuItem ID="properties" Icon="" Text="Properties" Shortcut="Ctrl+P" /> </liquidMenu:Menu> </Grid> Answers to these specific issues are great, a new way to think about this type of issue so that I understand how to control font size is better.

    Read the article

  • Attempting to update partial view using Ajax.ActionLink gives error in MicrosoftAjax.js

    - by mwright
    I am trying to update the partial view ( "OnlyPartialView" ) from an Ajax.ActionLink which is in the same partial view. While executing the foreach loop it throws this error in a popup box in visual studio: htmlfile: Unknown runtime error This error puts the break point in the MicrosoftAjax.js file, Line 5, Col 83,632, Ch 83632. The page is not updated appropriately. Any thoughts or ideas on how I could troubleshoot this? It was previously nested partial views, I've simplified it for this example but this code produces the same error. Is there a better way to do what I am trying to do? Index Page: <script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script> <script src="../../Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script> <ul> <% foreach (DomainObject domainObject in Model) { %> <% Html.RenderPartial("OnlyPartialView", domainObject); %> <% } %> </ul> OnlyPartialView: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<ProjectName.Models.DomainObject>" %> <%@ Import Namespace="ProjectName.Models"%> <li> <div id="<%=Model.Id%>"> //DISPLAY ATTRIBUTES </div> <div id="<%= Model.Id %>ActionStateLinks"> <% foreach ( var actionStateLink in Model.States[0].State.ActionStateLinks) {%> <div id="Div1"> <div> <%= actionStateLink.Action.Name %> </div> <div> <%= Ajax.ActionLink("Submit this Action", "DoAction", "ViewController", new { id = Model.Id, id2 = actionStateLink.ActionStateLinkId }, new AjaxOptions{ UpdateTargetId = Model.Id.ToString()} )%> </div> </div> <br /> <%} %> </div> </li> Controller: public ActionResult DoAction(Guid id, Guid id2) { DomainObject domainObject = _repository.GetDomainObject(id); ActionStateLink actionStateLink = _repository.GetActionStateLink(id2); domainObject.States[0].StateId = actionStateLink.FollowingStateId; repository.AddDomainObjectAction(domainObject, actionStateLink, DateTime.Now); _repository.Save(); return PartialView("OnlyPartialView", _repository.GetDomainObject(id)); }
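    A structural detail worth checking (my reading of the code, not something stated in the post): DoAction returns OnlyPartialView, which renders a complete <li>, while UpdateTargetId points at the <div> inside that <li>. After the update the markup ends up as an <li> nested inside a <div> inside another <li>, which Internet Explorer's innerHTML implementation rejects with exactly this htmlfile error; an id that starts with a digit (a raw Guid can) is another common trigger. A hedged variant of the action that returns only the inner content, paired with a prefixed target id, might look like this ("ItemContent" is a hypothetical partial that renders just what belongs inside the outer div):

    public ActionResult DoAction(Guid id, Guid id2)
    {
        DomainObject domainObject = _repository.GetDomainObject(id);
        ActionStateLink actionStateLink = _repository.GetActionStateLink(id2);

        domainObject.States[0].StateId = actionStateLink.FollowingStateId;
        _repository.AddDomainObjectAction(domainObject, actionStateLink, DateTime.Now);
        _repository.Save();

        // Render only the contents of the item, not the surrounding <li>,
        // so the returned fragment is valid where it is injected.
        return PartialView("ItemContent", _repository.GetDomainObject(id));
    }

    In the view, the matching change is UpdateTargetId = "item_" + Model.Id on the Ajax.ActionLink and id="item_<%= Model.Id %>" on the target div, so the id is unique and starts with a letter.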

    Read the article

  • Draw a custom cell for a tableview (UITableView), with changed colors and separator color and width

    - by Madhup
    Hi, I want to draw the background of a UITableViewCell which has a grouped style. The problem with me is I am not able to call the -(void)drawRect:(CGRect)rect or I think it should be called programmatically... I have taken code from following link . http://stackoverflow.com/questions/400965/how-to-customize-the-background-border-colors-of-a-grouped-table-view/1031593#1031593 // // CustomCellBackgroundView.h // // Created by Mike Akers on 11/21/08. // Copyright 2008 __MyCompanyName__. All rights reserved. // #import <UIKit/UIKit.h> typedef enum { CustomCellBackgroundViewPositionTop, CustomCellBackgroundViewPositionMiddle, CustomCellBackgroundViewPositionBottom, CustomCellBackgroundViewPositionSingle } CustomCellBackgroundViewPosition; @interface CustomCellBackgroundView : UIView { UIColor *borderColor; UIColor *fillColor; CustomCellBackgroundViewPosition position; } @property(nonatomic, retain) UIColor *borderColor, *fillColor; @property(nonatomic) CustomCellBackgroundViewPosition position; @end // // CustomCellBackgroundView.m // // Created by Mike Akers on 11/21/08. // Copyright 2008 __MyCompanyName__. All rights reserved. // #import "CustomCellBackgroundView.h" static void addRoundedRectToPath(CGContextRef context, CGRect rect, float ovalWidth,float ovalHeight); @implementation CustomCellBackgroundView @synthesize borderColor, fillColor, position; - (BOOL) isOpaque { return NO; } - (id)initWithFrame:(CGRect)frame { if (self = [super initWithFrame:frame]) { // Initialization code } return self; } - (void)drawRect:(CGRect)rect { // Drawing code CGContextRef c = UIGraphicsGetCurrentContext(); CGContextSetFillColorWithColor(c, [fillColor CGColor]); CGContextSetStrokeColorWithColor(c, [borderColor CGColor]); CGContextSetLineWidth(c, 2.0); if (position == CustomCellBackgroundViewPositionTop) { CGFloat minx = CGRectGetMinX(rect) , midx = CGRectGetMidX(rect), maxx = CGRectGetMaxX(rect) ; CGFloat miny = CGRectGetMinY(rect) , maxy = CGRectGetMaxY(rect) ; minx = minx + 1; miny = miny + 1; maxx = maxx - 1; maxy = maxy ; CGContextMoveToPoint(c, minx, maxy); CGContextAddArcToPoint(c, minx, miny, midx, miny, ROUND_SIZE); CGContextAddArcToPoint(c, maxx, miny, maxx, maxy, ROUND_SIZE); CGContextAddLineToPoint(c, maxx, maxy); // Close the path CGContextClosePath(c); // Fill & stroke the path CGContextDrawPath(c, kCGPathFillStroke); return; } else if (position == CustomCellBackgroundViewPositionBottom) { CGFloat minx = CGRectGetMinX(rect) , midx = CGRectGetMidX(rect), maxx = CGRectGetMaxX(rect) ; CGFloat miny = CGRectGetMinY(rect) , maxy = CGRectGetMaxY(rect) ; minx = minx + 1; miny = miny ; maxx = maxx - 1; maxy = maxy - 1; CGContextMoveToPoint(c, minx, miny); CGContextAddArcToPoint(c, minx, maxy, midx, maxy, ROUND_SIZE); CGContextAddArcToPoint(c, maxx, maxy, maxx, miny, ROUND_SIZE); CGContextAddLineToPoint(c, maxx, miny); // Close the path CGContextClosePath(c); // Fill & stroke the path CGContextDrawPath(c, kCGPathFillStroke); return; } else if (position == CustomCellBackgroundViewPositionMiddle) { CGFloat minx = CGRectGetMinX(rect) , maxx = CGRectGetMaxX(rect) ; CGFloat miny = CGRectGetMinY(rect) , maxy = CGRectGetMaxY(rect) ; minx = minx + 1; miny = miny ; maxx = maxx - 1; maxy = maxy ; CGContextMoveToPoint(c, minx, miny); CGContextAddLineToPoint(c, maxx, miny); CGContextAddLineToPoint(c, maxx, maxy); CGContextAddLineToPoint(c, minx, maxy); CGContextClosePath(c); // Fill & stroke the path CGContextDrawPath(c, kCGPathFillStroke); return; } else if (position == 
CustomCellBackgroundViewPositionSingle) { CGFloat minx = CGRectGetMinX(rect) , midx = CGRectGetMidX(rect), maxx = CGRectGetMaxX(rect) ; CGFloat miny = CGRectGetMinY(rect) , midy = CGRectGetMidY(rect) , maxy = CGRectGetMaxY(rect) ; minx = minx + 1; miny = miny + 1; maxx = maxx - 1; maxy = maxy - 1; CGContextMoveToPoint(c, minx, midy); CGContextAddArcToPoint(c, minx, miny, midx, miny, ROUND_SIZE); CGContextAddArcToPoint(c, maxx, miny, maxx, midy, ROUND_SIZE); CGContextAddArcToPoint(c, maxx, maxy, midx, maxy, ROUND_SIZE); CGContextAddArcToPoint(c, minx, maxy, minx, midy, ROUND_SIZE); // Close the path CGContextClosePath(c); // Fill & stroke the path CGContextDrawPath(c, kCGPathFillStroke); return; } } - (void)dealloc { [borderColor release]; [fillColor release]; [super dealloc]; } @end static void addRoundedRectToPath(CGContextRef context, CGRect rect, float ovalWidth,float ovalHeight) { float fw, fh; if (ovalWidth == 0 || ovalHeight == 0) {// 1 CGContextAddRect(context, rect); return; } CGContextSaveGState(context);// 2 CGContextTranslateCTM (context, CGRectGetMinX(rect),// 3 CGRectGetMinY(rect)); CGContextScaleCTM (context, ovalWidth, ovalHeight);// 4 fw = CGRectGetWidth (rect) / ovalWidth;// 5 fh = CGRectGetHeight (rect) / ovalHeight;// 6 CGContextMoveToPoint(context, fw, fh/2); // 7 CGContextAddArcToPoint(context, fw, fh, fw/2, fh, 1);// 8 CGContextAddArcToPoint(context, 0, fh, 0, fh/2, 1);// 9 CGContextAddArcToPoint(context, 0, 0, fw/2, 0, 1);// 10 CGContextAddArcToPoint(context, fw, 0, fw, fh/2, 1); // 11 CGContextClosePath(context);// 12 CGContextRestoreGState(context);// 13 } but the problem is my drawRect is not getting called automatically......... I am doing it like this. CustomCellBackgroundView *custView = [[CustomCellBackgroundView alloc] initWithFrame:CGRectMake(0,0,320,44)]; [cell setBackgroundView:custView]; [custView release]; and doing this gives me transparent cell. I tried and fought with code but could get any results. Please help me out. I am really having no idea how this code will run.

    Read the article

  • .Net Crystal Report printing application running on terminal service connection errors when session

    - by MrEdmundo
    I have created a .Net application to run on an App Server that gets requests for a report and prints out the requested report. The C# application uses Crystal Reports to load the report and subsequently print it out. The application is run on a server which is connected to via a Remote Desktop connection under a particular user account (required for old apps). When I disconnect from the Remote Session the application starts raising exceptions such as: Message: CrystalDecisions.Shared.CrystalReportsException: Load report failed. This type of error is never raised when the Remote Session is active. The server running the app is running Windows Server 2003; my box which creates the connection is Windows XP. I appreciate this is fairly weird, however I cannot see any problem with the application deployment I have created. Does anyone know what could be causing this issue?

    EDIT: I bit the bullet and created the application as a Windows service - obviously this doesn't take long, I just wasn't convinced it would solve the problem. Anyway, it doesn't!!! I have also tried removing the multi-threaded code that was calling the print function asynchronously. I did this in order to simplify the app and narrow down the reason it could fail. Anyway, this didn't improve the situation either!

    EDIT: The two errors I get are:

    System.Runtime.InteropServices.COMException (0x80000201): Invalid printer specified.
    at CrystalDecisions.ReportAppServer.Controllers.PrintOutputControllerClass.ModifyPrinterName(String newVal)
    at CrystalDecisions.CrystalReports.Engine.PrintOptions.set_PrinterName(String value)
    at Dsa.PrintServer.Service.Service.PrintCrystalReport(Report report)

    The printer isn't invalid; this is confirmed when 60 seconds later the timer ticks and the report is printed successfully. And:

    The request could not be submitted for background processing.
    at CrystalDecisions.ReportAppServer.Controllers.ReportSourceClass.GetLastPageNumber(RequestContext pRequestContext)
    at CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext reqContext)
    --- End of inner exception stack trace ---
    at CrystalDecisions.ReportAppServer.ConvertDotNetToErom.ThrowDotNetException(Exception e)
    at CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext reqContext)
    at CrystalDecisions.CrystalReports.Engine.FormatEngine.PrintToPrinter(Int32 nCopies, Boolean collated, Int32 startPageN, Int32 endPageN)
    at CrystalDecisions.CrystalReports.Engine.ReportDocument.PrintToPrinter(Int32 nCopies, Boolean collated, Int32 startPageN, Int32 endPageN)
    at Dsa.PrintServer.Service.Service.PrintCrystalReport(Report report)

    EDIT: I ran filemon to check if there were any access issues. At the point when the error occurs filemon reports: Request: OPEN | Path: C:\windows\assembly\gac_msil\system\2.0.0.0__b77a5c561934e089\ws2_32.dll | Result: NOT FOUND | Other: Attributes Error
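    Both error messages point at the printing environment of the disconnected session rather than at the report itself: when a Remote Desktop session is disconnected its redirected printers and default printer can disappear, and Crystal's ModifyPrinterName then fails with "Invalid printer specified" until the environment comes back - which fits the report printing fine sixty seconds later. A hedged sketch of printing against an explicitly named printer taken from configuration instead of the session default, with a simple retry; the printer share and report path are placeholders:

    using System;
    using System.Threading;
    using CrystalDecisions.CrystalReports.Engine;

    static class ReportPrinter
    {
        public static void Print(string reportPath)
        {
            using (var report = new ReportDocument())
            {
                report.Load(reportPath);

                // Never rely on the (possibly torn down) session default printer.
                report.PrintOptions.PrinterName = @"\\printserver\OfficeLaser";

                for (int attempt = 1; attempt <= 3; attempt++)
                {
                    try
                    {
                        report.PrintToPrinter(1, false, 0, 0);   // one copy, all pages
                        return;
                    }
                    catch (Exception)
                    {
                        if (attempt == 3) throw;
                        Thread.Sleep(5000);   // transient printer errors often clear on retry
                    }
                }
            }
        }
    }

    Running the worker as a Windows service under an account that has the printer installed, rather than inside an interactive Remote Desktop session, removes the dependence on the session staying connected - which is the direction the edit above was already heading.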

    Read the article

  • Using Joda DateTime as a Jersey parameter?

    - by HolySamosa
    I'd like to use Joda's DateTime for query parameters in Jersey, but this isn't supported by Jersey out-of-the-box. I'm assuming that implementing an InjectableProvider is the proper way to add DateTime support. Can someone point me to a good implementation of an InjectableProvider for DateTime? Or is there an alternative approach worth recommending? (I'm aware I can convert from Date or String in my code, but this seems like a lesser solution). Thanks. Solution: I modified Gili's answer below to use the @Context injection mechanism in JAX-RS rather than Guice. import com.sun.jersey.core.spi.component.ComponentContext; import com.sun.jersey.spi.inject.Injectable; import com.sun.jersey.spi.inject.PerRequestTypeInjectableProvider; import java.util.List; import javax.ws.rs.QueryParam; import javax.ws.rs.WebApplicationException; import javax.ws.rs.core.Context; import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.Status; import javax.ws.rs.core.UriInfo; import javax.ws.rs.ext.Provider; import org.joda.time.DateTime; /** * Enables DateTime to be used as a QueryParam. * <p/> * @author Gili Tzabari */ @Provider public class DateTimeInjector extends PerRequestTypeInjectableProvider<QueryParam, DateTime> { private final UriInfo uriInfo; /** * Creates a new DateTimeInjector. * <p/> * @param uriInfo an instance of {@link UriInfo} */ public DateTimeInjector( @Context UriInfo uriInfo) { super(DateTime.class); this.uriInfo = uriInfo; } @Override public Injectable<DateTime> getInjectable(final ComponentContext cc, final QueryParam a) { return new Injectable<DateTime>() { @Override public DateTime getValue() { final List<String> values = uriInfo.getQueryParameters().get(a.value()); if( values == null || values.isEmpty()) return null; if (values.size() > 1) { throw new WebApplicationException(Response.status(Status.BAD_REQUEST). entity(a.value() + " may only contain a single value").build()); } return new DateTime(values.get(0)); } }; } }

    Read the article

  • Render custom form / alter existing rendering template at runtime.

    - by Janis Veinbergs
    How do I create reusable custom new item form + preferrably, i don't want to tie this form to content type? I want to force render one hidden field (it could be render on the page, but make invisible or render on the page and display) and set field value programmatically (that's why it has to be rendered - to set it's value). Google has tons of information on how to create custom list form with sharepoint designer, but in my case, i don't want sharepoint designer for the advantages you see below. What i'm trying to achieve I want to be able to have a custom newform to create item (i don't want it to be as default). To open this newForm i would use CustomAction in item's ECB menu. In this form, i want to force render one hidden field and set it's value programmatically. I want to open this form from CustomAction ECB (item's context menu), so i don't want to set this as a default New form template for content type. <XmlDocument NamespaceURI="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms"> <FormTemplates xmlns="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms"> <New>ListForm</New> </FormTemplates> </XmlDocument> Idea #1 I could create custom RenderingTemplate and set Content type's new form template to my newly created template. For example, OOTB ListForm rendering template: <SharePoint:RenderingTemplate ID="ListForm" runat="server"> <Template> <SPAN id='part1'> <SharePoint:InformationBar runat="server"/> <wssuc:ToolBar CssClass="ms-formtoolbar" id="toolBarTbltop" RightButtonSeparator="&nbsp;" runat="server"> <Template_RightButtons> <SharePoint:NextPageButton runat="server"/> <SharePoint:SaveButton runat="server"/> <SharePoint:GoBackButton runat="server"/> </Template_RightButtons> </wssuc:ToolBar> <SharePoint:FormToolBar runat="server"/> <TABLE class="ms-formtable" style="margin-top: 8px;" border=0 cellpadding=0 cellspacing=0 width=100%> <SharePoint:ChangeContentType runat="server"/> <SharePoint:FolderFormFields runat="server"/> <SharePoint:ListFieldIterator runat="server" /> <SharePoint:ApprovalStatus runat="server"/> <SharePoint:FormComponent TemplateName="AttachmentRows" runat="server"/> </TABLE> <table cellpadding=0 cellspacing=0 width=100%><tr><td class="ms-formline"><IMG SRC="/_layouts/images/blank.gif" width=1 height=1 alt=""></td></tr></table> <TABLE cellpadding=0 cellspacing=0 width=100% style="padding-top: 7px"><tr><td width=100%> <SharePoint:ItemHiddenVersion runat="server"/> <SharePoint:ParentInformationField runat="server"/> <SharePoint:InitContentType runat="server"/> <wssuc:ToolBar CssClass="ms-formtoolbar" id="toolBarTbl" RightButtonSeparator="&nbsp;" runat="server"> <Template_Buttons> <SharePoint:CreatedModifiedInfo runat="server"/> </Template_Buttons> <Template_RightButtons> <SharePoint:SaveButton runat="server"/> <SharePoint:GoBackButton runat="server"/> </Template_RightButtons> </wssuc:ToolBar> </td></tr></TABLE> </SPAN> <SharePoint:AttachmentUpload runat="server"/> </Template> </SharePoint:RenderingTemplate> I only need such a minor change: <SharePoint:RenderingTemplate ID="NewRelatedListItemTemplate" runat="server"> ... <SharePoint:ListFieldIterator TemplateName="ListItemFormFieldsWithRelatedItems" runat="server" /> .. 
</SharePoint:RenderingTemplate> <SharePoint:RenderingTemplate ID="ListItemFormFieldsWithRelatedItems" runat="server"> <Template> <Balticovo:ListFieldIteratorExtended IncludeFields="RelatedItems" runat="server"/> </Template> </SharePoint:RenderingTemplate> Advantages over manual (SPD) custom forms In this way form is not "constant/static". If new list fields are added to list or content type afterwards, my custom form will render them (the ListFieldIterator will do it). Idea #2 Could it be that i modify existing RenderingTemplate at runtime? I would take "new forms" template (Named, for example, ListForm or it could be other than default ListForm) with SPControlTemplateManager.GetTemplateByName("ListForm") Find ListIterator control and add property TemplateName="ListItemFormFieldsWithRelatedItems" Render this template and return it? In short, i would like to create RenderingTemplate programmatically, on-the-fly and then use this template to render list's new form. Advantages I get the advantage of Idea 1 + This way i would get a bonus even if Template changes (from ListForm to CompanyCustomListForm) and my custom form won't loose my implemented functionality if i choose to change content type's rendering template later on (i can create other features not trying to rembeer to reimplement this particular stuff or other 3rd party features won't override my functionality if they use custom forms - loose coupling is it?). Now, is this (Idea #2) possible...?

    Read the article

  • TinyMCE with AJAX (Update Panel) never has a value.

    - by sah302
    I wanted to use a Rich Text Editor for a text area inside an update panel. I found this post: http://www.queness.com/post/212/10-jquery-and-non-jquery-javascript-rich-text-editors via this question: http://stackoverflow.com/questions/1207382/need-asp-net-mvc-rich-text-editor Decided to go with TinyMCE as I used it before in non AJAX situations, and it says in that list it is AJAX compatible. Alright I do the good ol' tinyMCE.init({ //settings here }); Test it out and it disappears after doing a update panel update. I figure out from a question on here that it should be in the page_load function so it gets run even on async postbacks. Alright do that and the panel stays. However, upon trying to submit the value from my textarea, the text of it always comes back as empty because my form validator always says "You must enter a description" even when I enter text into it. This happens the first time the page loads and after async postbacks have been done to the page. Alright I find this http://www.dallasjclark.com/using-tinymce-with-ajax/ and http://stackoverflow.com/questions/699615/cant-post-twice-from-the-same-ajax-tinymce-textarea. I try to add this code into my page load function right after the tinyMCE.init. Doing this breaks all my jquery being called also in the page_load after it, and it still has the same problem. I am still pretty beginner to client side scripting stuff, so maybe I need to put the code in a different spot than page_load? Not sure the posts I linked weren't very clue on where to put that code. My Javascript: <script type="text/javascript"> var redirectUrl = '<%= redirectUrl %>'; function pageLoad() { tinyMCE.init({ mode: "exact", elements: "ctl00_mainContent_tbDescription", theme: "advanced", plugins: "table,advhr,advimage,iespell,insertdatetime,preview,searchreplace,print,contextmenu,paste,fullscreen", theme_advanced_buttons1_add_before: "preview,separator", theme_advanced_buttons1: "bold,italic,underline,separator,justifyleft,justifycenter,justifyright, justifyfull,bullist,numlist,undo,redo,link,unlink,separator,styleselect,formatselect", theme_advanced_buttons2: "cut,copy,paste,pastetext,pasteword,separator,removeformat,cleanup,charmap,search,replace,separator,iespell,code,fullscreen", theme_advanced_buttons2_add_before: "", theme_advanced_buttons3: "", theme_advanced_toolbar_location: "top", theme_advanced_toolbar_align: "left", extended_valid_elements: "a[name|href|target|title|onclick],img[class|src|border=0|alt|title|hspace|vspace|width|height|align|onmouseover|onmouseout|name],hr[class|width|size|noshade],font[face|size|color|style],span[class|align|style]", paste_auto_cleanup_on_paste: true, paste_convert_headers_to_strong: true, button_tile_map: true }); tinyMCE.triggerSave(false, true); tiny_mce_editor = tinyMCE.get('ctl00_mainContent_tbDescription'); var newData = tiny_mce_editor.getContent(); tinyMCE.execCommand('mceRemoveControl', false, 'your_textarea_name'); //QJqueryUI dialog stuff }</script> Now my current code doesn't have the tinyMCE.execCommand("mceAddControl",true,'content'); which that one question indicated should also be added. I did try adding it but, again, wasn't sure where to put it and just putting it in the page_load seemed to have no effect. Textbox control: <asp:TextBox ID="tbDescription" runat="server" TextMode="MultiLine" Width="500px" Height="175px"></asp:TextBox><br /> How can I get these values so that the code behind can actually get what is typed in the textarea and my validator won't come up as saying it's empty? 
Even after async postbacks, since I have multiple buttons on the form that update it prior to actual submission. Thanks! Edit: For further clarification I have form validation on the back-end like so: If tbDescription.Text = "" Or tbDescription.Text Is Nothing Then lblDescriptionError.Text = "You must enter a description." isError = True Else lblDescriptionError.Text = "" End If And this error will always cause the error message to be dispalyed. Edit: Alright I am getting desperate here, I have spent hours on this. I finally found what I thought to be a winner on experts exchange which states the following (there was a part about encoding the value in xml, but I skipped that): For anyone who wants to use tinyMCE with AJAX.Net: Append begin/end handlers to the AJAX Request object. These will remove the tinyMCE control before sending the data (begin), and it will recreate the tinyMCE control (end): Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(function(sender, args) { var edID = "<%=this.ClientID%>_rte_tmce"; // the id of your textbox/textarea. var ed = tinyMCE.getInstanceById(edID); if (ed) { tinyMCE.execCommand('mceFocus', false, edID); tinyMCE.execCommand('mceRemoveControl', false, edID); } }); Sys.WebForms.PageRequestManager.getInstance().add_endRequest(function(sender, args) { var edID = "<%=this.ClientID%>_rte_tmce"; var ed = tinyMCE.getInstanceById(edID); if (ed) { tinyMCE.execCommand('mceAddControl', false, edID); } }); When the user changes/blurs from the tinyMCE control, we want to ensure that the textarea/textbox gets updated properly: ed.onChange.add(function(ed, l) { tinyMCE.triggerSave(true, true); }); Now I have tried this code putting it in its own script tag, putting the begin and end requests into their own script tags and putting the ed.onChange in the page_load, putting everything in the page_load, and putting all 3 in it's own script tag. In all cases it never worked, and even sometimes broke the jquery that is also in my page_load... (and yes I changed the above code to fit my page) Can anyone get this to work or offer a solution?

    Read the article

  • Integrating POP3 client functionality into a C# application?

    - by flesh
    I have a web application that requires a server-based component to periodically access POP3 email boxes and retrieve emails. The service then needs to process the emails, which will involve:

    - Validating the email against some business rules (does it contain a valid reference in the subject line, which user sent the mail, etc.)
    - Analysing and saving any attachments to disk
    - Taking the email body and attachment details and creating a new item in the database
    - Or updating an existing item where the reference matches the incoming email subject line

    What is the best way to approach this? I really don't want to have to write a POP3 client from scratch, but I need to be able to customize the processing of emails. Ideally I would be able to plug in some component that does the access and retrieval for me, returning arrays of attachments, body text, subject line, etc. ready for my processing...

    [ UPDATE: Reviews ] OK, so I have spent a fair amount of time looking into (mainly free) .NET POP3 libraries, so I thought I'd provide a short review of some of those mentioned below and a few others:

    - Pop3.net - free - works OK, very basic in terms of functionality provided. This is pretty much just the POP3 commands and some base64 encoding, but it's very straightforward - probably a good introduction
    - Pop3 Wizard - commercial / some open source code - couldn't get this to build, missing DLLs, I wouldn't bother with this
    - C#Mail - free - works well, comes with a MIME parser and SMTP client; however the comments are in Japanese (not a big deal) and it didn't work with SSL 'out of the box' - I had to change the SslStream constructor, after which it worked no problem
    - OpenPOP - free - hasn't been updated for about 5 years so its current state is .NET 1.0, and it doesn't support SSL, but that was no problem to resolve - I just replaced the existing stream with an SslStream and it worked. Comes with a MIME parser.

    Of the free libraries, I'd go for C#Mail or OpenPOP. I looked at a few commercial libraries: Chilkat, Rebex, RemObjects, JMail.net. Based on features, price and impression of the company, I would probably go for Rebex, and may switch in the future if my requirements change or I run into production issues with either C#Mail or OpenPOP. In case anyone needs it, this is the replacement SslStream constructor that I used to enable SSL with C#Mail and OpenPOP: SslStream stream = new SslStream(clientSocket.GetStream(), false, delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors) { return true; });
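    For anyone weighing up the libraries above, the protocol layer they wrap is small; what they really add is MIME parsing of bodies and attachments. A minimal, hedged sketch of a POP3-over-SSL session using only the BCL is shown below (host, port 995 and credentials are placeholders, and the certificate callback accepts everything, which is fine for a quick test but not for production):

    using System;
    using System.IO;
    using System.Net.Security;
    using System.Net.Sockets;

    class Pop3Probe
    {
        static void Main()
        {
            using (var client = new TcpClient("pop.example.com", 995))
            using (var ssl = new SslStream(client.GetStream(), false,
                                           (sender, cert, chain, errors) => true))
            {
                ssl.AuthenticateAsClient("pop.example.com");

                var reader = new StreamReader(ssl);
                var writer = new StreamWriter(ssl) { AutoFlush = true, NewLine = "\r\n" };

                Console.WriteLine(reader.ReadLine());         // +OK greeting
                writer.WriteLine("USER mailbox@example.com");
                Console.WriteLine(reader.ReadLine());
                writer.WriteLine("PASS secret");
                Console.WriteLine(reader.ReadLine());
                writer.WriteLine("STAT");                     // +OK <message count> <total octets>
                Console.WriteLine(reader.ReadLine());
                writer.WriteLine("QUIT");
            }
        }
    }

    Retrieving a message is just RETR <n> followed by reading lines until a lone "." - and at that point the MIME parsing starts, which is exactly the work OpenPOP or C#Mail saves you.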

    Read the article

  • How to send messages between a C++ .dll and a C# app using named pipes?

    - by Gal
    I'm making an injected .dll written in C++, and I want to communicate with a C# app using named pipes. Now, I am using the built-in System.IO.Pipes .NET classes in the C# app, and I'm using the regular functions in C++. I don't have much experience in C++ (read: this is my first C++ code..), though I'm experienced in C#. It seems that the connection between the server and the client is working; the only problem is the messages aren't being sent. I tried making the .dll the server, making the C# app the server, and making the pipe direction InOut (duplex), but none of it seems to work. When I tried to make the .dll the server, which sends messages to the C# app, the code I used was like this:

    DWORD ServerCreate() // function to create the server and wait till it successfully creates it to return. { hPipe = CreateNamedPipe(pipename,//The unique pipe name. This string must have the following form: \\.\pipe\pipename PIPE_ACCESS_DUPLEX, PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_NOWAIT, //write and read and return right away PIPE_UNLIMITED_INSTANCES,//The maximum number of instances that can be created for this pipe 4096 , // output time-out 4096 , // input time-out 0,//client time-out NULL); if(hPipe== INVALID_HANDLE_VALUE) { return 1;//failed } else return 0;//success } void SendMsg(string msg) { DWORD cbWritten; WriteFile(hPipe,msg.c_str(), msg.length()+1, &cbWritten,NULL); } void ProccesingPipeInstance() { while(ServerCreate() == 1)//if failed { Sleep(1000); } //if created success, wait to connect ConnectNamedPipe(hPipe, NULL); for(;;) { SendMsg("HI!"); if( ConnectNamedPipe(hPipe, NULL)==0) if(GetLastError()==ERROR_NO_DATA) { DebugPrintA("previous closed,ERROR_NO_DATA"); DisconnectNamedPipe(hPipe); ConnectNamedPipe(hPipe, NULL); } Sleep(1000); }

    And the C# client like this:

    static void Main(string[] args) { Console.WriteLine("Hello!"); using (var pipe = new NamedPipeClientStream(".", "HyprPipe", PipeDirection.In)) { Console.WriteLine("Created Client!"); Console.Write("Connecting to pipe server in the .dll ..."); pipe.Connect(); Console.WriteLine("DONE!"); using (var pr = new StreamReader(pipe)) { string t; while ((t = pr.ReadLine()) != null) { Console.WriteLine("Message: {0}",t); } } } }

    I see that the C# client connected to the .dll, but it won't receive any message.. I tried doing it the other way around, as I said before, trying to make the C# app send messages to the .dll, which would show them in a message box. The .dll was injected and connected to the C# server, but when it received a message it just crashed the application it was injected into. Please help me, or guide me on how to use named pipes between a C++ and a C# app.
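    Two details in the code above are worth calling out (my interpretation, not something confirmed in the post): the C++ side writes "HI!" with WriteFile and no trailing newline, while the C# side blocks in StreamReader.ReadLine() waiting for one, so nothing ever appears even though the bytes were delivered; and PIPE_NOWAIT selects the legacy non-blocking mode, which makes ConnectNamedPipe and WriteFile return immediately in ways that are easy to misread. A hedged version of the client that reads whatever bytes arrive instead of whole lines looks like this (pipe name taken from the post; any framing beyond the C++ side's trailing '\0' is still up to you):

    using System;
    using System.IO.Pipes;
    using System.Text;

    class PipeClient
    {
        static void Main()
        {
            using (var pipe = new NamedPipeClientStream(".", "HyprPipe", PipeDirection.In))
            {
                pipe.Connect();
                var buffer = new byte[4096];

                while (true)
                {
                    int read = pipe.Read(buffer, 0, buffer.Length);
                    if (read == 0) break;   // the server closed its end of the pipe

                    // The C++ server sends msg.length() + 1 bytes, i.e. the text
                    // plus its terminating '\0', so trim that before decoding.
                    string message = Encoding.ASCII.GetString(buffer, 0, read).TrimEnd('\0');
                    Console.WriteLine("Message: {0}", message);
                }
            }
        }
    }

    Alternatively, keep ReadLine() on the C# side and have the C++ server append "\n" to each message; either way both ends have to agree on how a message is delimited.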

    Read the article

  • Word Spell Check pops up hidden and "freezes" my App

    - by Refracted Paladin
    I am using Word's Spell Check in my in house WinForm app. My clients are all XP machines with Office 2007 and randomly the spell check suggestion box pops up behind the App and makes everything "appear" frozen as you cannot get at it. Suggestions? What do other people do to work around this or stop it altogether? Thanks Below is my code, for reference, though I am doubtful that this has anything to do with my code but I'll take anything. public class SpellCheckers { public string CheckSpelling(string text) { Word.Application app = new Word.Application(); object nullobj = Missing.Value; object template = Missing.Value; object newTemplate = Missing.Value; object documentType = Missing.Value; object visible = false; object optional = Missing.Value; object savechanges = false; app.ShowMe(); Word._Document doc = app.Documents.Add(ref template, ref newTemplate, ref documentType, ref visible); doc.Words.First.InsertBefore(text); Word.ProofreadingErrors errors = doc.SpellingErrors; var ecount = errors.Count; doc.CheckSpelling(ref optional, ref optional, ref optional, ref optional, ref optional, ref optional, ref optional, ref optional, ref optional, ref optional, ref optional, ref optional); object first = 0; object last = doc.Characters.Count - 1; var results = doc.Range(ref first, ref last).Text; doc.Close(ref savechanges, ref nullobj, ref nullobj); app.Quit(ref savechanges, ref nullobj, ref nullobj); Marshal.ReleaseComObject(doc); Marshal.ReleaseComObject(app); Marshal.ReleaseComObject(errors); return results; } } And I call it from my WinForm app like so -- public static void SpellCheckControl(Control control) { if (IsWord2007Available()) { if (control.HasChildren) { foreach (Control ctrl in control.Controls) { SpellCheckControl(ctrl); } } if (IsValidSpellCheckControl(control)) { if (control.Text != String.Empty) { control.BackColor = Color.FromArgb(180, 215, 195); control.Text = Spelling.CheckSpelling(control.Text); control.Text = control.Text.Replace("\r", "\r\n"); control.ResetBackColor(); } } } }
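    One workaround worth trying (an assumption on my part, not something from the post): because Word runs in its own process, the spelling dialog it displays is not owned by the WinForms window, so whether it appears in front of the application is essentially down to z-order luck. Making the hidden Word instance visible and active for the duration of the dialog keeps the suggestion box on top. A hedged sketch of that change, written as a helper around the same CheckSpelling call used above:

    using System.Reflection;
    using Word = Microsoft.Office.Interop.Word;

    static class SpellCheckHelper
    {
        // Bring Word to the foreground before showing its spelling dialog so the
        // suggestion box cannot open behind the calling application.
        public static void CheckSpellingInFront(Word.Application app, Word._Document doc)
        {
            object optional = Missing.Value;

            app.Visible = true;    // the dialog needs a visible, active Word window
            app.Activate();

            doc.CheckSpelling(ref optional, ref optional, ref optional, ref optional,
                              ref optional, ref optional, ref optional, ref optional,
                              ref optional, ref optional, ref optional, ref optional);

            app.Visible = false;   // hide Word again once the dialog has closed
        }
    }

    Making Word visible does mean an empty Word window can flash up behind the dialog; if that is unacceptable, the alternative is to keep Word hidden and bring its dialog forward with SetForegroundWindow via P/Invoke, at the cost of a little more plumbing.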

    Read the article

< Previous Page | 589 590 591 592 593 594 595 596 597 598 599 600  | Next Page >