Search Results

Search found 1798 results on 72 pages for 'incoming'.


  • Managing Scripts in Oracle SQL Developer

    - by thatjeffsmith
    You back up your databases, right? You back up your home computer – your media collection, tax documents, bank accounts, etc. – right? You back up your handy-dandy SQL scripts, right? OK, now that I’ve got your head nodding, I want to answer a question I get every so often: how can I manage my scripts in SQL Developer?

    This is an interesting question. First, it assumes that one SHOULD manage their scripts in their IDE. What the question generally gets around to is: how can we navigate to our scripts, open them, and execute them? What a good IDE should have is an interface to your existing Version Control System (VCS). SQL Developer supports both Subversion and Git out of the box, and you can also download an extension via check-for-updates to get support for CVS. Now, what I’m about to show you COULD be done without versioning and controlling your scripts – but I want to ask you why you wouldn’t want to do this. So I’m going to proceed and assume that you do INDEED version your scripts already.

    Seeing what scripts you’ve already got in your repository is very straightforward – just open the Team Versions panel, then connect to your repository. It shows you the files in your source control system. Now, I could ‘preview’ a file right away: if I open the file from here, we get a temp copy pulled down from the server to the local machine. This is a local temp copy of the controlled script – I can read and execute it, but not write to it. And that might be all you need. But if your script calls other scripts, then you’re going to want to check out the server copy of your stuff into your local SVN working-copy directory. That way, when your script calls another script, you’re executing the PRODUCTION-APPROVED copies of said scripts, and if you do SPOOL or other file I/O, it will work as expected.

    To get to those client copies of your scripts, enter the Files panel. The Files panel is accessible from the View menu, and you can get to your files one of two ways: if you’ve touched the file recently, you can see it under the Recent tree; otherwise, you can navigate to your local ‘checked out’ copies of your script(s). Open your local copies, see what’s changed, and so on. I can also access the change history and see what’s been touched – what changes am I going to ‘push out’ if I commit this back to the server? Most of us work on teams, yes? This panel also gives me a heads-up if someone else is making changes to the same file – I can see the ‘incoming’ changes as well.

    To sum it up: if I want to get a script to run, I do a full get to my local directory and open the script(s). The Files panel will tell me if my local copy is out of date from the server, and whether I have made local changes I’ve forgotten to commit back up to the server and my fellow teammates. Now, if you’re the selfish type and don’t want to share, that’s fine. But you should still be backing up your scripts, and you can still use the Files panel to manage them.

    Read the article

  • Thoughts on C# Extension Methods

    - by Damon
    I'm not a huge fan of extension methods.  When they first came out, I remember seeing a method on an object that was fairly useful, but when I went to use it in another piece of code that method wasn't available.  It turns out it was an extension method, and I hadn't included the appropriate assembly and imports statement in my code to use it.  I remember being a bit confused at first about how the heck that could happen (hey, extension methods were new, cut me some slack), and it took a bit of time to track down exactly what I needed to include to get that method back.  I imagined a new developer trying to figure out why a method was missing, fruitlessly searching MSDN for a method that didn't exist, and it just didn't sit well with me.

    I am of the opinion that if you have an object, then you shouldn't have to include additional assemblies to get additional instance-level methods out of that object.  That opinion applies to namespaces as well – I do not like it when the contents of a namespace are split out into multiple assemblies.  I prefer to have static utility classes instead of extension methods to keep things nicely packaged into a cohesive unit.  It also makes it abundantly clear where utility methods are used in code.  I will concede, however, that it can make code a bit more verbose and lengthy.  There is always a trade-off.

    Some people harp on extension methods because they break the tenets of object-oriented development and allow you to add methods to sealed classes.  Whatever.  Extension methods are just utility methods that you can tack onto an object after the fact.  Extension methods do not give you any more access to an object than the developer of that object allows, so I say that those who cry OO foul on extension methods really don't have much of an argument on which to stand.  In fact, I have to concede that my dislike of them is really more about style than anything of great substance.

    One interesting thing I found regarding extension methods is that you can call them on null objects. Take a look at this extension method:

    namespace ExtensionMethods
    {
      public static class StringUtility
      {
        public static int WordCount(this string str)
        {
          if (str == null) return 0;
          return str.Split(new char[] { ' ', '.', '?' },
            StringSplitOptions.RemoveEmptyEntries).Length;
        }
      }
    }

    Notice that the extension method checks to see if the incoming string parameter is null.  I was worried that the runtime would check the object instance to make sure it was not null before calling an extension method, but that is apparently not the case.  So the following code runs just fine:

    string s = null;
    int words = s.WordCount();

    I am a big fan of things working, but this seems to go against everything I've come to know about instance-level methods.  However, an extension method is really a static method masquerading as an instance-level method, so I suppose it would be far more frustrating if it failed, since there is really no reason it shouldn't succeed.

    Although I'm not a fan of extension methods, I will say that if you ever find yourself at an impasse with a die-hard fan of either the utility-class or extension-method approach, there is a common ground.  Extension methods are defined in static classes, and you can call them from those static classes as well as directly from the objects they extend.  So if you build your utility classes using extension methods, then you can have it your way and they can have it theirs.
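    To make that common ground concrete, here is a minimal sketch (reusing the WordCount extension above; the Program class and sample string are only illustrative) showing that the same method can be invoked either with extension-method syntax or as an ordinary static utility call – both compile down to the same static method invocation:

    using ExtensionMethods; // brings StringUtility.WordCount into scope

    class Program
    {
        static void Main()
        {
            string s = "Hello world. How are you?";

            // Extension-method (instance-style) syntax
            int viaExtension = s.WordCount();

            // Plain static utility call on the defining class
            int viaStatic = StringUtility.WordCount(s);

            System.Console.WriteLine(viaExtension == viaStatic); // prints True
        }
    }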

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture of what I have: As you can see above, the image has lighting – but the shadow map is displaying incorrectly. The shadow map is shown in the bottom-left of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generated backwards, in the wrong direction – I think. But the problem is a little deeper – I'm just plotting the shadow onto the screen, which I know is wrong – I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct – so there has to be something wrong with my shader code somewhere. Here's what my code looks like:

    VERTEX:

    uniform mat4 LightModelViewProjectionMatrix;

    varying vec3 Normal;          // The eye-space normal of the current vertex.
    varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
    varying vec3 LightDirection;  // The eye-space direction of the light.

    void main()
    {
        Normal = normalize(gl_NormalMatrix * gl_Normal);
        LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz);
        LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex;
        LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5;
        gl_Position = ftransform();
        gl_TexCoord[0] = gl_MultiTexCoord0;
    }

    FRAGMENT:

    uniform sampler2D DiffuseMap;
    uniform sampler2D ShadowMap;

    varying vec3 Normal;          // The eye-space normal of the current vertex.
    varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
    varying vec3 LightDirection;  // The eye-space direction of the light.

    void main()
    {
        vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0]));

        // Directional lighting
        // Build ambient lighting
        vec4 AmbientElement = gl_LightSource[0].ambient;

        // Build diffuse lighting
        float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0);
        vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert );
        vec4 LightingColor = ( DiffuseElement + AmbientElement );
        LightingColor.r = min(LightingColor.r, 1.0);
        LightingColor.g = min(LightingColor.g, 1.0);
        LightingColor.b = min(LightingColor.b, 1.0);
        LightingColor.a = min(LightingColor.a, 1.0);
        LightingColor *= Texel;

        // Everything up to this point is PERFECT

        // Shadow mapping
        // ------------------------------
        vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w;
        float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z;
        float DepthBias = 0.001;
        float ShadowFactor = 1.0;
        if( LightCoordinate.w > 0.0 )
        {
            ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0;
        }
        LightingColor.rgb *= ShadowFactor;

        //gl_FragColor = LightingColor; // Yes, I know this is wrong, but this line produces the wrong effect
        gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st );
    }

    I wanted to make sure the coordinates were correct for the shadow map – that's why you see it applied to the image as it is. But the depth for each point seems to be wrong – the shadows SHOULD be the opposite way around (look at the image – the areas shaded by the normal lighting face the opposite direction from the shadows). Maybe my matrices are bad, or something going in is wrong? They're isolated and appear to be correct – nothing unusual is going in.
    When I render from the light's point of view and get the MVP matrices for it, they're correct. EDIT: I added an image so you can see what happens when I use the correct statement at the end of the GLSL – that's the image when the last line is just gl_FragColor = LightingColor;. Maybe someone has some idea of what I screwed up?
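    For reference only – not necessarily the fix for this particular scene – the conventional shadow-map comparison is usually written with the bias subtracted from the fragment's light-space depth to reduce acne. A sketch, reusing the names from the fragment shader above:

    // Sketch: a fragment is shadowed when its light-space depth is farther
    // from the light than the depth stored in the shadow map.
    float StoredDepth   = texture2D( ShadowMap, ShadowCoordinate.st ).z;
    float FragmentDepth = ShadowCoordinate.z;
    ShadowFactor = ( FragmentDepth - DepthBias ) > StoredDepth ? 0.5 : 1.0;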

    Read the article

  • ffmpeg unmet dependencies

    - by Nikki
    I faced an issue recently when I tried to install ffmpeg on my Ubuntu computer. I am running Ubuntu 11.10 64-bit, all the latest updates are installed and the system runs perfectly; however, I need to record my desktop, and I have read in many articles that ffmpeg is one of the best recording tools for it (besides providing packages for video). So I tried to run:

    sudo apt-get install ffmpeg

    However, I wasn't able to do this because the packages have unmet dependencies. Here is the full output I receive after trying to install the package above:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     ffmpeg : Depends: libavcodec53 (< 4:0.7.3-99) but it is not going to be installed or
                       libavcodec-extra-53 (< 4:0.7.3.99) but 4:0.8.0.1~ppa2 is to be installed
              Depends: libavdevice53 (>= 4:0.7.3-0ubuntu0.11.10.1) but it is not going to be installed or
                       libavdevice-extra-53 (>= 4:0.7.3) but it is not going to be installed
              Depends: libavdevice53 (< 4:0.7.3-99) but it is not going to be installed or
                       libavdevice-extra-53 (< 4:0.7.3.99) but it is not going to be installed
              Depends: libavfilter2 (>= 4:0.7.3-0ubuntu0.11.10.1) but it is not going to be installed or
                       libavfilter-extra-2 (>= 4:0.7.3) but it is not going to be installed
              Depends: libavfilter2 (< 4:0.7.3-99) but it is not going to be installed or
                       libavfilter-extra-2 (< 4:0.7.3.99) but it is not going to be installed
              Depends: libavformat53 (< 4:0.7.3-99) but 4:0.8-1u1~ppa2 is to be installed or
                       libavformat-extra-53 (< 4:0.7.3.99) but it is not going to be installed
              Depends: libavutil51 (< 4:0.7.3-99) but it is not going to be installed or
                       libavutil-extra-51 (< 4:0.7.3.99) but 4:0.8.0.1~ppa2 is to be installed
              Depends: libpostproc52 (< 4:0.7.3-99) but 4:0.8-1u1~ppa2 is to be installed or
                       libpostproc-extra-52 (< 4:0.7.3.99) but it is not going to be installed
              Depends: libswscale2 (< 4:0.7.3-99) but 4:0.8-1u1~ppa2 is to be installed or
                       libswscale-extra-2 (< 4:0.7.3.99) but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.

    This problem didn't exist on my previous laptop, which ran the same Ubuntu 11.10 64-bit as my new one. Can anyone please help me find a solution without "messing up and breaking" the whole system? Thank you in advance for helping.
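    The version strings ending in ~ppa2 suggest the conflicting libav* packages come from a third-party PPA. As a rough, hedged starting point (the PPA name below is a placeholder – check the apt-cache policy output for the real source), one could inspect where the newer packages come from and roll that PPA back with ppa-purge before retrying the install:

    # See which repository provides the conflicting library versions
    apt-cache policy libavcodec53 libavcodec-extra-53

    # Install ppa-purge, then downgrade everything from the offending PPA
    # (replace the PPA name with whatever the policy output actually shows)
    sudo apt-get install ppa-purge
    sudo ppa-purge ppa:SOME-USER/SOME-FFMPEG-PPA

    # Try the original install again
    sudo apt-get update
    sudo apt-get install ffmpeg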

    Read the article

  • Dropped impression 25 days after restructure

    - by Hamid
    Our website is a non-English property-related website (moshaver.com), similar to rightmove.co.uk. In September 2012 our website was adversely affected by Panda, causing our incoming clicks from Google to drop from around 3,000 clicks to less than a thousand. We were hoping that Google would eventually realize we are not a spam website and things would get better. However, by August 2013 we were almost sure that we needed to do something, so we started to restructure our web content. We used the canonical tag to remove our search results pages and point to our listing pages, used the noindex tag to remove listing pages which do not have any properties at the moment, and also changed title tags to more friendly ones, in addition to other changes. Our changes went live on 10th August.

    As shown in the graph taken from the Search Engine Optimization section of Google Analytics, these changes resulted in an increase in the number of times Google displayed our results in its search results: our impressions almost doubled starting 15th August. However, as the graph shows, our CTR dropped from that date, from around 15% to 8%. This might have been because of our changed title tags (so people were less likely to click on them), or it might be normal for increased impressions. This situation continued up until 10th September, when our impressions decreased dramatically to less than a thousand. That is almost 30% of our original impressions (before the website restructure) and 15% of the new impressions. At the same time our CTR increased dramatically, to around 50%.

    I have two theories for this increase. The first is that these statistics are less accurate at lower impression counts. The second is that Google is now only displaying our results for queries directly related to our website (our name, our URL), and not for general terms such as "apartments in a specific city". The second theory would also explain the dramatic decrease in impressions.

    After digging into the analytics data a little more, I constructed a table that breaks down our impressions, clicks and CTR across the different Google products (web and image search) and in total. What I understand from this table is that most of our increased impressions after the restructure were in the image search section, and I don't think image search users would be looking for content on our website. Furthermore, it shows that the drop in our web search CTR is not as dramatic as the drop in the overall CTR (-30% compared to -60%). I thought posting it here might help you understand the situation better.

    Is it possible that Google tested our new structure for 25 days, and then decided to decrease our impressions because of the new, lower CTR? Or should we look for another factor? If this is the case, how long does it usually take for Google to give us another chance? It has been one month since our impressions dropped.

    Read the article

  • Big Data – How to become a Data Scientist and Learn Data Science? – Day 19 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of analytics in the Big Data story. In this article we will understand how to become a Data Scientist for the Big Data story. Data Scientist is a new buzzword; everyone seems to want to become a Data Scientist. Let us go over a few key topics related to Data Scientists in this blog post. First of all, we will understand what a Data Scientist is. In the new world of Big Data, I see that pretty much everyone wants to become a Data Scientist, and I have already met lots of people who claim that they are Data Scientists. When I ask what their role is, I get a wide variety of answers.

    What is a Data Scientist? Data Scientists are experts who understand various aspects of the business and know how to strategize with data to achieve the business goals. They should have a solid foundation in data algorithms, modeling and statistical methodology.

    What do Data Scientists do? Data Scientists understand the data very well. They go beyond the regular data algorithms and build interesting trends from the available data. They innovate and draw entirely new meaning from the existing data. They are artists in the disguise of computer analysts. They look at the data traditionally as well as explore various new ways of looking at it. Data Scientists do not wait to build their solutions from existing data; they think creatively, and they think before the data has even entered the system. Data Scientists are visionary experts who understand the business needs and plan ahead of time, which tremendously helps to build solutions at rapid speed. Besides being data experts, the major quality of Data Scientists is “curiosity”. They always wonder what more they can get from their existing data and how to get the maximum out of future incoming data. Data Scientists do wonders with the data, going beyond the job descriptions of Data Analysts or Business Analysts.

    Skills Required for Data Scientists. Here are a few of the skills a Data Scientist must have: expert-level skills with statistical tools like SAS, Excel, R, etc.; understanding of mathematical models; hands-on experience with visualization tools like Tableau, PowerPivot, D3.js, etc.; analytical skills to understand business needs; and communication skills. On the technology front, any Data Scientist should know the underlying technologies like Hadoop and Cloudera, as well as their entire ecosystem (programming languages, analysis and visualization tools, etc.). Remember that becoming a successful Data Scientist requires excellent skills; just having a degree in a relevant field of education will not suffice.

    Final Note. Data Scientist is indeed a very exciting job profile. As per research, there are not enough Data Scientists in the world to handle the current data explosion. In the near future data is going to expand exponentially, and the need for Data Scientists will increase along with it. It is indeed the job to focus on if you like data and the science of statistics. Courtesy: EMC.

    Tomorrow. In tomorrow’s blog post we will discuss various Big Data learning resources.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Creating a new naming context in OUD

    - by Sylvain Duloutre
    A naming context (also known as a directory suffix) is a DN that identifies the top entry in a locally held directory hierarchy. A new naming context can be created using ODSM, the OUD GUI admin console, as described in http://docs.oracle.com/cd/E29407_01/admin.111200/e22648/server_config.htm#CBDGCJGF. It can also be created using the dsconfig command line, as described below. Creating a new naming context consists of three steps.

    First, create a Local Backend Workflow element (myNewDb in this example), responsible for the naming context base DN, e.g. o=example:

    dsconfig create-workflow-element \
        --set base-dn:o=example \
        --set enabled:true \
        --type db-local-backend \
        --element-name myNewDb \
        --hostname <your host> \
        --port <admin port> \
        --bindDN cn=Directory\ Manager \
        --bindPasswordFile ****** \
        --no-prompt

    Second, create a Workflow element (workFlowForMyNewDb in this example) associated with the Local Backend Workflow element. Workflow elements are used to route LDAP requests to the appropriate database, based on the target base DN:

    dsconfig create-workflow \
        --set base-dn:o=example \
        --set enabled:true \
        --set workflow-element:myNewDb \
        --type generic \
        --workflow-name workFlowForMyNewDb \
        --hostname <your host name> \
        --port <admin port> \
        --bindDN cn=Directory\ Manager \
        --bindPasswordFile ****** \
        --no-prompt

    Then the workflow element must be made visible outside of the directory, i.e. added to the internal "routing table". This is done by adding the workflow to the appropriate Network Group. A Network Group is used to classify incoming client connections and route requests to workflows:

    dsconfig set-network-group-prop \
        --group-name network-group \
        --add workflow:workFlowForMyNewDb \
        --hostname <your hostname> \
        --port <admin port> \
        --bindDN cn=Directory\ Manager \
        --bindPasswordFile ****** \
        --no-prompt

    At that stage, it is possible to import entries into the new naming context o=example.
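    As a quick, hedged illustration of that last step (hostnames, ports and credentials below are placeholders, and the LDIF is only a minimal example entry), the base entry for the new suffix could be added with a standard ldapmodify call and a small LDIF file:

    # base-entry.ldif – the top entry for the new naming context
    dn: o=example
    objectClass: top
    objectClass: organization
    o: example

    # Add it with ldapmodify (replace host, port and password with real values)
    ldapmodify -h <your host> -p <ldap port> \
        -D "cn=Directory Manager" -w <password> \
        -a -f base-entry.ldif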

    Read the article

  • Moving monarchs and dragons: migrating the JDK bugs to JIRA

    - by darcy
    Among insects, monarch butterflies and dragonflies have the longest migrations; migrating JDK bugs involves a long journey as well! As previously announced by Mark back in March, we've been working according to a revised plan to transition JDK bug management from Sun's legacy system to an initially Oracle-internal JIRA instance, which will afterward be made visible and usable externally. I've been busily working on this project for the last few months, and the team has made good progress on many aspects of the effort.

    JDK bugs will be imported into JIRA regardless of age; bugs will also be imported regardless of state, including closed bugs. Consequently, the JDK bug project will start pre-populated with over 100,000 existing bugs, some dating all the way back to 1994. This will allow a continuity of information and allow new issues to be linked to old ones. Using a custom import process, the Sun bug numbers will be preserved in JIRA. For example, the Sun bug with bug number 4040458 will become "JDK-4040458" in JIRA. In JIRA the project name, "JDK" in our case, is part of the bug's identifier. Bugs created after the JIRA migration will be numbered starting at 8000000; bugs imported from the legacy system have numbers ranging between 1000000 and 7999999.

    We're working with the bugs.sun.com team to try to maintain continuity of the ability both to read JDK bug information and to file new incidents. At least for now, the overall architecture of bugs.sun.com will be the same as it is today: it will be a gateway bridging to an Oracle-internal system, but the internal system will change from the legacy database to JIRA. Generally we are aiming to preserve the visibility of bugs currently viewable on bugs.sun.com; however, bugs in areas not related to the JDK will not be visible after the transition to JIRA. New incoming incidents will be sent to a separate JIRA project for initial triage before possibly being moved into the JDK project.

    JDK bug management leans heavily on being able to track the state of bugs in multiple releases, especially to coordinate delivering synchronized security releases (known as CPUs, critical patch updates, in Oracle parlance). For a security release, it is common for half a dozen or more release trains to be affected (for example, JDK 5, JDK 6 update, OpenJDK 6, JDK 7 update, JDK 8, virtual releases for HotSpot Express, etc.). We've determined we need to track at least the tuple of (release, responsible engineer/assignee for the release, status in the release) for the release trains a fix is going into. To do this in JIRA, we are creating a separate port/backport issue type along with a custom link type to allow the multiple-release information to be easily grouped and presented together.

    The Sun legacy system had a three-level classification scheme: product, category, and subcategory. Out of the box, JIRA only has a one-level classification, component. We've implemented a custom second-level classification, subcomponent. As part of the bug migration we've taken the opportunity to think about how bugs should be grouped under a two-level system, and the new system will be simpler and more regular. The main top-level components of the JDK product will include: core-libs, client-libs, deploy, install, security-libs, other-libs, tools, and hotspot. For the libs areas, the primary name of the subcomponent will be the package of the API in question. In the core-libs component, there will be subcomponents like java.lang, java.lang.class_loading, java.math, java.util, and java.util:i18n. In the tools component, subcomponents will primarily correspond to command names in $JDK/bin, like jar, javac, and javap.

    The first several bulk imports of the JDK bugs into JIRA have gone well, and we're continuing to refine the import to have greater fidelity to the current data, including by reconstructing information not brought over in a structured fashion during the previous large JDK bug system migration back in 2004. We don't currently have a firm timeline for when the new system will be usable externally, but as it becomes available, I'll share further information in follow-up blog posts.

    Read the article

  • Help finding missing mumble-server dependencies

    - by Otoris
    I'm trying to install the mumble-server package using apt-get install mumble-server on Ubuntu 11.10 Server Edition on Rackspace Cloud. The problem is that apt can't find dependencies it should be able to find, because they exist on launchpad.net. The dependency message is:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     mumble-server : Depends: libavahi-compat-libdnssd1 (>= 0.6.16) but it is not installable
                     Depends: libprotobuf7 but it is not installable
                     Depends: libqt4-dbus (>= 4:4.5.3) but it is not installable
                     Depends: libqt4-network (>= 4:4.5.3) but it is not installable
                     Depends: libqt4-sql (>= 4:4.5.3) but it is not installable
                     Depends: libqt4-xml (>= 4:4.5.3) but it is not installable
                     Depends: libqtcore4 (>= 4:4.7.0~beta1) but it is not installable
                     Depends: libqt4-sql-sqlite but it is not installable
    E: Unable to correct problems, you have held broken packages.

    Any ideas on whether I might be missing sources? I've been googling around and haven't found anyone else in this situation, or anyone else unable to install the aforementioned packages. Thanks for your time! My sources.list:

    deb http://mirror.rackspace.com/ubuntu/ oneiric restricted
    deb-src http://mirror.rackspace.com/ubuntu/ oneiric restricted

    ## Major bug fix updates produced after the final release of the
    ## distribution.
    deb http://mirror.rackspace.com/ubuntu/ oneiric-updates restricted
    deb-src http://mirror.rackspace.com/ubuntu/ oneiric-updates restricted

    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team. Also, please note that software in universe WILL NOT receive any
    ## review or updates from the Ubuntu security team.
    deb http://mirror.rackspace.com/ubuntu/ oneiric universe
    deb-src http://mirror.rackspace.com/ubuntu/ oneiric universe
    deb http://mirror.rackspace.com/ubuntu/ oneiric-updates universe
    deb-src http://mirror.rackspace.com/ubuntu/ oneiric-updates universe

    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team, and may not be under a free licence. Please satisfy yourself as to
    ## your rights to use the software. Also, please note that software in
    ## multiverse WILL NOT receive any review or updates from the Ubuntu
    ## security team.
    deb http://mirror.rackspace.com/ubuntu/ oneiric multiverse
    deb-src http://mirror.rackspace.com/ubuntu/ oneiric multiverse
    deb http://mirror.rackspace.com/ubuntu/ oneiric-updates multiverse
    deb-src http://mirror.rackspace.com/ubuntu/ oneiric-updates multiverse

    ## Uncomment the following two lines to add software from the 'backports'
    ## repository.
    ## N.B. software from this repository may not have been tested as
    ## extensively as that contained in the release, although it includes
    ## newer versions of some applications which may provide useful features.
    ## Also, please note that software in backports WILL NOT receive any review
    ## or updates from the Ubuntu security team.
    # deb http://us.archive.ubuntu.com/ubuntu/ oneiric-backports restricted universe multiverse
    # deb-src http://us.archive.ubuntu.com/ubuntu/ oneiric-backports restricted universe multiverse

    ## Uncomment the following two lines to add software from Canonical's
    ## 'partner' repository.
    ## This software is not part of Ubuntu, but is offered by Canonical and the
    ## respective vendors as a service to Ubuntu users.
    # deb http://archive.canonical.com/ubuntu oneiric partner
    # deb-src http://archive.canonical.com/ubuntu oneiric partner

    deb http://security.ubuntu.com/ubuntu oneiric-security restricted
    deb-src http://security.ubuntu.com/ubuntu oneiric-security restricted
    deb http://security.ubuntu.com/ubuntu oneiric-security universe
    deb-src http://security.ubuntu.com/ubuntu oneiric-security universe
    deb http://security.ubuntu.com/ubuntu oneiric-security multiverse
    deb-src http://security.ubuntu.com/ubuntu oneiric-security multiverse

    # Cool Kid Webmin/Usermin Here Brah
    deb http://download.webmin.com/download/repository sarge contrib
    deb http://webmin.mirror.somersettechsolutions.co.uk/repository sarge contrib
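    One thing worth noting (a hedged observation, not a confirmed fix): none of the active lines above reference the main component, which is where packages such as libqtcore4 and libavahi-compat-libdnssd1 normally live on Ubuntu. A sketch of the lines that would typically be present, using the same Rackspace mirror:

    deb http://mirror.rackspace.com/ubuntu/ oneiric main
    deb-src http://mirror.rackspace.com/ubuntu/ oneiric main
    deb http://mirror.rackspace.com/ubuntu/ oneiric-updates main
    deb-src http://mirror.rackspace.com/ubuntu/ oneiric-updates main
    deb http://security.ubuntu.com/ubuntu oneiric-security main
    deb-src http://security.ubuntu.com/ubuntu oneiric-security main

    # followed by:
    # sudo apt-get update && sudo apt-get install mumble-server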

    Read the article

  • Green (Screen) Computing

    - by onefloridacoder
    I recently was given an assignment to create a UX where a user could use the up and down arrow keys, as well as the tab and enter keys, to move through a Silverlight DataGrid that is going to be used as part of a high-throughput data entry UI. And to be honest, I've not trapped key codes since I coded JavaScript a few years ago.  Although the frameworks I'm using made it easy, it wasn't without some trial and error.  The other thing that bothered me was that the customer tossed this into the use case as they were delivering the use case.  Fine.  I'll take a whack at anything, beat myself up, and beg (I'm not beyond begging for help) the community for help to get something done if I have to. It wasn't as bad as I thought, and I figured I would hopefully save someone a few keystrokes if you wanted to build a green screen for your customer.

    Here's the ValueConverter to handle changing the strings to decimals and then back again.  The value is a nullable value type, so there are a few extra steps to take.  Usually the ConvertBack() method doesn't get addressed, but in this case we have two-way binding, and the converter needs to ensure that if the user doesn't enter a value it will remain null when the value is reapplied to the model object's setter.

    using System;
    using System.Windows.Data;
    using System.Globalization;

    public class NullableDecimalToStringConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (!(((decimal?)value).HasValue))
            {
                return (decimal?)null;
            }
            if (!(value is decimal))
            {
                throw new ArgumentException("The value must be of type decimal");
            }

            NumberFormatInfo nfi = culture.NumberFormat;
            nfi.NumberDecimalDigits = 4;

            return ((decimal)value).ToString("N", nfi);
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            decimal nullableDecimal;
            decimal.TryParse(value.ToString(), out nullableDecimal);

            return nullableDecimal == 0 ? null : nullableDecimal.ToString();
        }
    }

    The ConvertBack() method uses TryParse to create a value from the incoming string, so if the parse fails we get a null value back, which is what we would expect.  But while I was testing I realized that if the user types something like "2..4" instead of "2.4", TryParse will fail and still return a null.  The user is getting puuu-lenty of eye-candy to ensure they know how many values are affected in this particular view.

    Here's the XAML code.  This is the simple part: we just have a DataGrid with one column that's bound to the appropriate ViewModel property, with the converter referenced as well.

    <data:DataGridTextColumn
        Header="On-Hand"
        Binding="{Binding Quantity,
            Mode=TwoWay,
            Converter={StaticResource DecimalToStringConverter}}"
        IsReadOnly="False" />

    Nothing too magical here.  Just some XAML to hook things up.  Here's the code-behind that handles the DataGrid KeyUp event.  These are wired to a local/private method, but could be converted to something the ViewModel could use; I just need to get this working for now.

    // Wire-up happens in the constructor
    this.PicDataGrid.KeyUp += (s, e) => this.HandleKeyUp(e);

    // DataGrid.BeginEdit fires when DataGrid.KeyUp fires.
    private void HandleKeyUp(KeyEventArgs args)
    {
        if (args.Key == Key.Down ||
            args.Key == Key.Up ||
            args.Key == Key.Tab ||
            args.Key == Key.Enter)
        {
            this.PicDataGrid.BeginEdit();
        }
    }

    And that's it.  The ValueConverter was the biggest problem starting out, because I was using an existing converter that didn't take nullable value types into account.  Once the converter was passing back the appropriate value (null, "#.####"), the grid cell(s) and the model objects started working as I needed them to. HTH.

    Read the article

  • XNA Multiplayer Games and Networking

    - by JoshReuben
    ·        XNA communication must by default be lightweight – if you are syncing game state between players from the Game.Update method, you must minimize traffic. That game loop may be firing 60 times a second and player 5 needs to know if his tank has collided with any player 3 and the angle of that gun turret. There are no WCF ServiceContract / DataContract niceties here, but at the same time the XNA networking stack simplifies the details. The payload must be simplistic - just an ordered set of numbers that you would map to meaningful enum values upon deserialization.   Overview ·        XNA allows you to create and join multiplayer game sessions, to manage game state across clients, and to interact with the friends list ·        Dependency on Gamer Services - to receive notifications such as sign-in status changes and game invitations ·        two types of online multiplayer games: system link game sessions (LAN) and LIVE sessions (WAN). ·        Minimum dev requirements: 1 Xbox 360 console + Creators Club membership to test network code - run 1 instance of game on Xbox 360, and 1 on a Windows-based computer   Network Sessions ·        A network session is made up of players in a game + up to 8 arbitrary integer properties describing the session ·        create custom enums – (e.g. GameMode, SkillLevel) as keys in NetworkSessionProperties collection ·        Player state: lobby, in-play   Session Types ·        local session - for split-screen gaming - requires no network traffic. ·        system link session - connects multiple gaming machines over a local subnet. ·        Xbox LIVE multiplayer session - occurs on the Internet. Ranked or unranked   Session Updates ·        NetworkSession class Update method - must be called once per frame. ·        performs the following actions: o   Sends the network packets. o   Changes the session state. o   Raises the managed events for any significant state changes. o   Returns the incoming packet data. ·        synchronize the session à packet-received and state-change events à no threading issues   Session Config ·        Session host - gaming machine that creates the session. XNA handles host migration ·        NetworkSession properties: AllowJoinInProgress , AllowHostMigration ·        NetworkSession groups: AllGamers, LocalGamers, RemoteGamers   Subscribe to NetworkSession events ·        GamerJoined ·        GamerLeft ·        GameStarted ·        GameEnded – use to return to lobby ·        SessionEnded – use to return to title screen   Create a Session session = NetworkSession.Create(         NetworkSessionType.SystemLink,         maximumLocalPlayers,         maximumGamers,         privateGamerSlots,         sessionProperties );   Start a Session if (session.IsHost) {     if (session.IsEveryoneReady)     {        session.StartGame();        foreach (var gamer in SignedInGamer.SignedInGamers)        {             gamer.Presence.PresenceMode =                 GamerPresenceMode.InCombat;   Find a Network Session AvailableNetworkSessionCollection availableSessions = NetworkSession.Find(     NetworkSessionType.SystemLink,       maximumLocalPlayers,     networkSessionProperties); availableSessions.AllowJoinInProgress = true;   Join a Network Session NetworkSession session = NetworkSession.Join(     availableSessions[selectedSessionIndex]);   Sending Network Data var packetWriter = new PacketWriter(); foreach (LocalNetworkGamer gamer in session.LocalGamers) {     // Get the tank associated with this player.     
Tank myTank = gamer.Tag as Tank;     // Write the data.     packetWriter.Write(myTank.Position);     packetWriter.Write(myTank.TankRotation);     packetWriter.Write(myTank.TurretRotation);     packetWriter.Write(myTank.IsFiring);     packetWriter.Write(myTank.Health);       // Send it to everyone.     gamer.SendData(packetWriter, SendDataOptions.None);     }   Receiving Network Data foreach (LocalNetworkGamer gamer in session.LocalGamers) {     // Keep reading while packets are available.     while (gamer.IsDataAvailable)     {         NetworkGamer sender;          // Read a single packet.         gamer.ReceiveData(packetReader, out sender);          if (!sender.IsLocal)         {             // Get the tank associated with this packet.             Tank remoteTank = sender.Tag as Tank;              // Read the data and apply it to the tank.             remoteTank.Position = packetReader.ReadVector2();             …   End a Session if (session.AllGamers.Count == 1)         {             session.EndGame();             session.Update();         }   Performance •        Aim to minimize payload, reliable in order messages •        Send Data Options: o   Unreliable, out of order -(SendDataOptions.None) o   Unreliable, in order (SendDataOptions.InOrder) o   Reliable, out of order (SendDataOptions.Reliable) o   Reliable, in order (SendDataOptions.ReliableInOrder) o   Chat data (SendDataOptions.Chat) •        Simulate: NetworkSession.SimulatedLatency , NetworkSession.SimulatedPacketLoss •        Voice support – NetworkGamer properties: HasVoice ,IsTalking , IsMutedByLocalUser

    Read the article

  • Identity in .NET 4.5&ndash;Part 2: Claims Transformation in ASP.NET (Beta 1)

    - by Your DisplayName here!
    In my last post I described how every identity in .NET 4.5 is now claims-based. If you are coming from WIF you might think, great – how do I transform those claims? Sidebar: What is claims transformation? One of the most essential features of WIF (and .NET 4.5) is the ability to transform credentials (or tokens) to claims. During that process the “low level” token details are turned into claims. An example would be a Windows token – it contains things like the name of the user and to which groups he belongs to. That information will be surfaced as claims of type Name and GroupSid. Forms users will be represented as a Name claim (all the other claims that WIF provided for FormsIdentity are gone in 4.5). The issue here is, that your applications typically don’t care about those low level details, but rather about “what’s the purchase limit of alice”. The process of turning the low level claims into application specific ones is called claims transformation. In pre-claims times this would have been done by a combination of Forms Authentication extensibility, role manager and maybe ASP.NET profile. With claims transformation all your identity gathering code is in one place (and the outcome can be cached in a single place as opposed to multiple ones). The structural class to do claims transformation is called ClaimsAuthenticationManager. This class has two purposes – first looking at the incoming (low level) principal and making sure all required information about the user is present. This is your first chance to reject a request. And second – modeling identity information in a way it is relevant for the application (see also here). This class gets called (when present) during the pipeline when using WS-Federation. But not when using the standard .NET principals. I am not sure why – maybe because it is beta 1. Anyhow, a number of people asked me about it, and the following is a little HTTP module that brings that feature back in 4.5. public class ClaimsTransformationHttpModule : IHttpModule {     public void Dispose()     { }     public void Init(HttpApplication context)     {         context.PostAuthenticateRequest += Context_PostAuthenticateRequest;     }     void Context_PostAuthenticateRequest(object sender, EventArgs e)     {         var context = ((HttpApplication)sender).Context;         // no need to call transformation if session already exists         if (FederatedAuthentication.SessionAuthenticationModule != null &&             FederatedAuthentication.SessionAuthenticationModule.ContainsSessionTokenCookie(context.Request.Cookies))         {             return;         }         var transformer = FederatedAuthentication.FederationConfiguration.IdentityConfiguration.ClaimsAuthenticationManager;         if (transformer != null)         {             var transformedPrincipal = transformer.Authenticate(context.Request.RawUrl, context.User as ClaimsPrincipal);             context.User = transformedPrincipal;             Thread.CurrentPrincipal = transformedPrincipal;         }     } } HTH
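    To round out the picture, here is a minimal sketch of the kind of ClaimsAuthenticationManager the module above would invoke. The override signature is the real one from System.Security.Claims in .NET 4.5, but the claim type URI and the LookupPurchaseLimit helper are hypothetical, purely illustrative stand-ins:

    using System.Collections.Generic;
    using System.Security;
    using System.Security.Claims;

    public class MyClaimsTransformer : ClaimsAuthenticationManager
    {
        public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
        {
            if (incomingPrincipal == null || !incomingPrincipal.Identity.IsAuthenticated)
            {
                return incomingPrincipal; // nothing to transform for anonymous requests
            }

            // First chance to reject: require a name on the incoming principal
            var name = incomingPrincipal.Identity.Name;
            if (string.IsNullOrWhiteSpace(name))
            {
                throw new SecurityException("Name claim is missing.");
            }

            // Model application-specific identity, e.g. "what's the purchase limit of alice"
            var claims = new List<Claim>
            {
                new Claim(ClaimTypes.Name, name),
                new Claim("http://myapp/claims/purchaselimit", LookupPurchaseLimit(name))
            };

            return new ClaimsPrincipal(new ClaimsIdentity(claims, "Custom"));
        }

        private static string LookupPurchaseLimit(string userName)
        {
            // Hypothetical lookup – a real implementation would hit a profile store
            return "10000";
        }
    }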

    Read the article

  • WIF-less claim extraction from ACS: JWT

    - by Elton Stoneman
    ACS support for JWT still shows as "beta", but it meets the spec and it works nicely, so it's becoming the preferred option as SWT is losing favour. (Note that currently ACS doesn’t support JWT encryption, if you want encrypted tokens you need to go SAML). In my last post I covered pulling claims from an ACS token without WIF, using the SWT format. The JWT format is a little more complex, but you can still inspect claims just with string manipulation. The incoming token from ACS is still presented in the BinarySecurityToken element of the XML payload, with a TokenType of urn:ietf:params:oauth:token-type:jwt: <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">   <t:Lifetime>     <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:39:55.337Z</wsu:Created>     <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:19:55.337Z</wsu:Expires>   </t:Lifetime>   <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">     <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">       <Address>http://localhost/x.y.z</Address>     </EndpointReference>   </wsp:AppliesTo>   <t:RequestedSecurityToken>     <wsse:BinarySecurityToken wsu:Id="_1eeb5cf4-b40b-40f2-89e0-a3343f6bd985-6A15D1EED0CDB0D8FA48C7D566232154" ValueType="urn:ietf:params:oauth:token-type:jwt" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">[ base64string ] </wsse:BinarySecurityToken>   </t:RequestedSecurityToken>   <t:TokenType>urn:ietf:params:oauth:token-type:jwt</t:TokenType>   <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>   <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType> </t:RequestSecurityTokenResponse> The token as a whole needs to be base-64 decoded. The decoded value contains a header, payload and signature, dot-separated; the parts are also base-64, but they need to be decoded using a no-padding algorithm (implementation and more details in this MSDN article on validating an Exchange 2013 identity token). The values are then in JSON; the header contains the token type and the hashing algorithm: "{"typ":"JWT","alg":"HS256"}" The payload contains the same data as in the SWT, but JSON rather than querystring format: {"aud":"http://localhost/x.y.z" "iss":"https://adfstest-bhw.accesscontrol.windows.net/" "nbf":1346398795 "exp":1346404795 "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant":"2012-08-31T07:39:53.652Z" "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows" "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname":"xyz" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress":"[email protected]" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"[email protected]" "identityprovider":"http://fs.svc.x.y.z.com/adfs/services/trust"} The signature is in the third part of the token. Unlike SWT which is fixed to HMAC-SHA-256, JWT can support other protocols (the one in use is specified as the "alg" value in the header). 
How to: Validate an Exchange 2013 identity token contains an implementation of a JWT parser and validator; apart from the custom base-64 decoding part, it’s very similar to SWT extraction. I've wrapped the basic SWT and JWT in a ClaimInspector.aspx page on gitHub here: SWT and JWT claim inspector. You can drop it into any ASP.Net site and set the URL to be your redirect page in ACS. Swap ACS to issue SWT or JWT, and using the same page you can inspect the claims that come out.
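    As a small illustration of the "no-padding" base64url decoding step described above (a sketch only – signature validation and error handling are deliberately omitted), splitting the dot-separated token and decoding its payload might look like this:

    using System;
    using System.Text;

    public static class JwtInspector
    {
        // Decode a base64url segment (JWT parts use '-'/'_' and omit '=' padding)
        private static string DecodeSegment(string segment)
        {
            string s = segment.Replace('-', '+').Replace('_', '/');
            switch (s.Length % 4)      // restore the padding that JWT strips
            {
                case 2: s += "=="; break;
                case 3: s += "=";  break;
            }
            return Encoding.UTF8.GetString(Convert.FromBase64String(s));
        }

        // Returns the JSON payload (the second dot-separated part) of a JWT
        public static string GetPayloadJson(string jwt)
        {
            string[] parts = jwt.Split('.');   // header.payload.signature
            return DecodeSegment(parts[1]);
        }
    }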

    Read the article

  • Getting Help with 'SEPA' Questions

    - by MargaretW
    What is 'SEPA'? The Single Euro Payments Area (SEPA) is a self-regulatory initiative for the European banking industry championed by the European Commission (EC) and the European Central Bank (ECB). The aim of the SEPA initiative is to improve the efficiency of cross border payments and the economies of scale by developing common standards, procedures, and infrastructure. The SEPA territory currently consists of 33 European countries -- the 28 EU states, together with Iceland, Liechtenstein, Monaco, Norway and Switzerland. Part of that infrastructure includes two new SEPA instruments that were introduced in 2008: SEPA Credit Transfer (a Payables transaction in Oracle EBS) SEPA Core Direct Debit (a Receivables transaction in Oracle EBS) A SEPA Credit Transfer (SCT) is an outgoing payment instrument for the execution of credit transfers in Euro between customer payment accounts located in SEPA. SEPA Credit Transfers are executed on behalf of an Originator holding a payment account with an Originator Bank in favor of a Beneficiary holding a payment account at a Beneficiary Bank. In R12 of Oracle applications, the current SEPA credit transfer implementation is based on Version 5 of the "SEPA Credit Transfer Scheme Customer-To-Bank Implementation Guidelines" and the "SEPA Credit Transfer Scheme Rulebook" issued by European Payments Council (EPC). These guidelines define the rules to be applied to the UNIFI (ISO20022) XML message standards for the implementation of the SEPA Credit Transfers in the customer-to-bank space. This format is compliant with SEPA Credit Transfer version 6. A SEPA Core Direct Debit (SDD) is an incoming payment instrument used for making domestic and cross-border payments within the 33 countries of SEPA, wherein the debtor (payer) authorizes the creditor (payee) to collect the payment from his bank account. The payment can be a fixed amount like a mortgage payment, or variable amounts such as those of invoices. The "SEPA Core Direct Debit" scheme replaces various country-specific direct debit schemes currently prevailing within the SEPA zone. SDD is based on the ISO20022 XML messaging standards, version 5.0 of the "SEPA Core Direct Debit Scheme Rulebook", and "SEPA Direct Debit Core Scheme Customer-to-Bank Implementation Guidelines". This format is also compliant with SEPA Core Direct Debit version 6. EU Regulation #260/2012 established the technical and business requirements for both instruments in euro. The regulation is referred to as the "SEPA end-date regulation", and also defines the deadlines for the migration to the new SEPA instruments: Euro Member States: February 1, 2014 Non-Euro Member States: October 31, 2016. 
Oracle and SEPA Within the Oracle E-Business Suite of applications, Oracle Payables (AP), Oracle Receivables (AR), and Oracle Payments (IBY) provide SEPA transaction capabilities for the following releases, as noted: Release 11.5.10.x -  AP & AR Release 12.0.x - AP & AR & IBY Release 12.1.x - AP & AR & IBY Release 12.2.x - AP & AR & IBY Resources To assist our customers in migrating, using, and troubleshooting SEPA functionality, a number of resource documents related to SEPA are available on My Oracle Support (MOS), including: R11i: AP: White Paper - SEPA Credit Transfer V5 support in Oracle Payables, Doc ID 1404743.1R11i: AR: White Paper - SEPA Core Direct Debit v5.0 support in Oracle Receivables, Doc ID 1410159.1R12: IBY: White Paper - SEPA Credit Transfer v5 support in Oracle Payments, Doc ID 1404007.1R12: IBY: White Paper - SEPA Core Direct Debit v5 support in Oracle Payments, Doc ID 1420049.1R11i/R12: AP/AR/IBY: Get Help Setting Up, Using, and Troubleshooting SEPA Payments in Oracle, Doc ID 1594441.2R11i/R12: Single European Payments Area (SEPA) - UPDATES, Doc ID 1541718.1R11i/R12: FAQs for Single European Payments Area (SEPA), Doc ID 791226.1

    Read the article

  • Non-blocking I/O using Servlet 3.1: Scalable applications using Java EE 7 (TOTD #188)

    - by arungupta
    Servlet 3.0 allowed asynchronous request processing, but only traditional I/O was permitted. This can restrict the scalability of your applications. In a typical application, ServletInputStream is read in a while loop:

    public class TestServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException, ServletException {
            ServletInputStream input = request.getInputStream();
            byte[] b = new byte[1024];
            int len = -1;
            while ((len = input.read(b)) != -1) {
                . . .
            }
        }
    }

    If the incoming data is blocking or streamed slower than the server can read, then the server thread is left waiting for that data. The same can happen if the data is written to ServletOutputStream. This is resolved in Servlet 3.1 (JSR 340, to be released as part of Java EE 7) by adding event listeners – the ReadListener and WriteListener interfaces. These are registered using ServletInputStream.setReadListener and ServletOutputStream.setWriteListener. The listeners have callback methods that are invoked when content is available to be read or can be written without blocking. The updated doGet in our case will look like:

    AsyncContext context = request.startAsync();
    ServletInputStream input = request.getInputStream();
    input.setReadListener(new MyReadListener(input, context));

    Invoking the setXXXListener methods indicates that non-blocking I/O is used instead of traditional I/O. At most one ReadListener can be registered on ServletInputStream, and similarly at most one WriteListener can be registered on ServletOutputStream. ServletInputStream.isReady and ServletInputStream.isFinished are new methods to check the status of a non-blocking read. ServletOutputStream.canWrite is a new method to check if data can be written without blocking. The MyReadListener implementation looks like:

    @Override
    public void onDataAvailable() {
        try {
            StringBuilder sb = new StringBuilder();
            int len = -1;
            byte b[] = new byte[1024];
            while (input.isReady() && (len = input.read(b)) != -1) {
                String data = new String(b, 0, len);
                System.out.println("--> " + data);
            }
        } catch (IOException ex) {
            Logger.getLogger(MyReadListener.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    @Override
    public void onAllDataRead() {
        System.out.println("onAllDataRead");
        context.complete();
    }

    @Override
    public void onError(Throwable t) {
        t.printStackTrace();
        context.complete();
    }

    This implementation has three callbacks: the onDataAvailable callback is called whenever data can be read without blocking; the onAllDataRead callback is invoked when the data for the current request has been completely read; and the onError callback is invoked if there is an error processing the request. Notice that context.complete() is called in onAllDataRead and onError to signal the completion of the data read. For now, the first chunk of available data needs to be read in the doGet or service method of the Servlet; the rest of the data can then be read in a non-blocking way using the ReadListener. This is going to get cleaned up so that all data reads can happen in the ReadListener only.

    The sample explained above can be downloaded from here and works with GlassFish 4.0 build 64 onwards. The slides and a complete re-run of the "What's new in Servlet 3.1: An Overview" session at JavaOne are available here. Here are some more references for you: Java EE 7 Specification Status, Servlet Specification Project, JSR Expert Group Discussion Archive, Servlet 3.1 Javadocs.
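    Going back to the write side mentioned above: the text only names ServletOutputStream.setWriteListener and canWrite, so the sketch below is an assumption about what a corresponding write listener could look like. The callback names mirror the read side (onWritePossible/onError), but since this targets a pre-release draft of JSR 340, treat the exact method names as placeholders rather than the final API:

    import java.io.IOException;
    import java.util.Queue;
    import javax.servlet.AsyncContext;
    import javax.servlet.ServletOutputStream;
    import javax.servlet.WriteListener;

    public class MyWriteListener implements WriteListener {

        private final ServletOutputStream output;
        private final AsyncContext context;
        private final Queue<String> chunks;   // data queued for the response

        public MyWriteListener(ServletOutputStream output, AsyncContext context, Queue<String> chunks) {
            this.output = output;
            this.context = context;
            this.chunks = chunks;
        }

        @Override
        public void onWritePossible() throws IOException {
            // Keep writing only while the container says it won't block
            while (output.canWrite() && !chunks.isEmpty()) {
                output.print(chunks.poll());
            }
            if (chunks.isEmpty()) {
                context.complete();   // all data written
            }
        }

        @Override
        public void onError(Throwable t) {
            t.printStackTrace();
            context.complete();
        }
    }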

    Read the article

  • Custom Text and Binary Payloads using WebSocket (TOTD #186)

    - by arungupta
    TOTD #185 explained how to process text and binary payloads in a WebSocket endpoint. In summary, a text payload may be received as public void receiveTextMessage(String message) {    . . . } And binary payload may be received as: public void recieveBinaryMessage(ByteBuffer message) {    . . .} As you realize, both of these methods receive the text and binary data in raw format. However you may like to receive and send the data using a POJO. This marshaling and unmarshaling can be done in the method implementation but JSR 356 API provides a cleaner way. For encoding and decoding text payload into POJO, Decoder.Text (for inbound payload) and Encoder.Text (for outbound payload) interfaces need to be implemented. A sample implementation below shows how text payload consisting of JSON structures can be encoded and decoded. public class MyMessage implements Decoder.Text<MyMessage>, Encoder.Text<MyMessage> {     private JsonObject jsonObject;    @Override    public MyMessage decode(String string) throws DecodeException {        this.jsonObject = new JsonReader(new StringReader(string)).readObject();               return this;    }     @Override    public boolean willDecode(String string) {        return true;    }     @Override    public String encode(MyMessage myMessage) throws EncodeException {        return myMessage.jsonObject.toString();    } public JsonObject getObject() { return jsonObject; }} In this implementation, the decode method decodes incoming text payload to MyMessage, the encode method encodes MyMessage for the outgoing text payload, and the willDecode method returns true or false if the message can be decoded. The encoder and decoder implementation classes need to be specified in the WebSocket endpoint as: @WebSocketEndpoint(value="/endpoint", encoders={MyMessage.class}, decoders={MyMessage.class}) public class MyEndpoint { public MyMessage receiveMessage(MyMessage message) { . . . } } Notice the updated method signature where the application is working with MyMessage instead of the raw string. Note that the encoder and decoder implementations just illustrate the point and provide no validation or exception handling. Similarly Encooder.Binary and Decoder.Binary interfaces need to be implemented for encoding and decoding binary payload. Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds) TOTD #183 - Getting Started with WebSocket in GlassFish TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket Subsequent blogs will discuss the following topics (not necessary in that order) ... Error handling Interface-driven WebSocket endpoint Java client API Client and Server configuration Security Subprotocols Extensions Other topics from the API
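    Returning to the binary case mentioned above, it follows the same pattern. Below is a rough sketch that mirrors the MyMessage example, using the same draft API, so the exact interface shapes (and the omitted imports, which were still in flux in the draft packages) are assumptions rather than the final specification:

    public class MyBinaryMessage implements Decoder.Binary<MyBinaryMessage>, Encoder.Binary<MyBinaryMessage> {

        private byte[] payload;

        @Override
        public MyBinaryMessage decode(ByteBuffer bytes) throws DecodeException {
            payload = new byte[bytes.remaining()];
            bytes.get(payload);              // copy the raw bytes out of the buffer
            return this;
        }

        @Override
        public boolean willDecode(ByteBuffer bytes) {
            return true;                     // accept any binary frame
        }

        @Override
        public ByteBuffer encode(MyBinaryMessage message) throws EncodeException {
            return ByteBuffer.wrap(message.payload);
        }

        public byte[] getPayload() {
            return payload;
        }
    }

    It would be registered the same way as the text version, via the encoders and decoders attributes of @WebSocketEndpoint.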

    Read the article

  • bulk insert and update with ADO.NET Entity Framework

    - by Keith Barrows
    I am writing a small application that does a lot of feed processing. I want to use LINQ to Entities for this, as speed is not an issue; it is a single-user app and, in the end, will only be used once a month. My question revolves around the best way to do bulk inserts using LINQ to Entities. After parsing the incoming data stream I end up with a List of values. Since the end user may end up trying to import some duplicate data, I would like to "clean" the data during insert rather than reading all the records, doing a for loop, rejecting records, and then finally importing the remainder. This is what I am currently doing:

    DateTime minDate = dataTransferObject.Min(c => c.DoorOpen);
    DateTime maxDate = dataTransferObject.Max(c => c.DoorOpen);
    using (LabUseEntities myEntities = new LabUseEntities())
    {
        var recCheck = myEntities.ImportDoorAccess
            .Where(a => a.DoorOpen >= minDate && a.DoorOpen <= maxDate)
            .ToList();
        if (recCheck.Count > 0)
        {
            foreach (ImportDoorAccess ida in recCheck)
            {
                DoorAudit da = dataTransferObject
                    .Where(a => a.DoorOpen == ida.DoorOpen && a.CardNumber == ida.CardNumber)
                    .First();
                if (da != null)
                    da.DoInsert = false;
            }
        }
        ImportDoorAccess newIDA;
        foreach (DoorAudit newDoorAudit in dataTransferObject)
        {
            if (newDoorAudit.DoInsert)
            {
                newIDA = new ImportDoorAccess
                {
                    CardNumber = newDoorAudit.CardNumber,
                    Door = newDoorAudit.Door,
                    DoorOpen = newDoorAudit.DoorOpen,
                    Imported = newDoorAudit.Imported,
                    RawData = newDoorAudit.RawData,
                    UserName = newDoorAudit.UserName
                };
                myEntities.AddToImportDoorAccess(newIDA);
            }
        }
        myEntities.SaveChanges();
    }

    I am also getting this error:

    System.Data.UpdateException was unhandled
    Message="Unable to update the EntitySet 'ImportDoorAccess' because it has a DefiningQuery and no <UpdateFunction> element exists in the <ModificationFunctionMapping> element to support the current operation."
    Source="System.Data.SqlServerCe.Entity"

    What am I doing wrong? Any pointers are welcome.
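    (A possible refinement, not part of the original question: instead of issuing one LINQ query per existing record, the duplicate check can be done against an in-memory set of the existing (DoorOpen, CardNumber) keys. The sketch below reuses the entity and property names from the code above; everything else is illustrative and assumes the same ObjectContext-style model. It needs System, System.Collections.Generic and System.Linq.)

    using (var myEntities = new LabUseEntities())
    {
        // Pull down only the key columns for the affected date range, once.
        var existingKeys = new HashSet<string>(
            myEntities.ImportDoorAccess
                .Where(a => a.DoorOpen >= minDate && a.DoorOpen <= maxDate)
                .Select(a => new { a.DoorOpen, a.CardNumber })
                .AsEnumerable()
                .Select(a => a.DoorOpen.Ticks + "|" + a.CardNumber));

        // Insert only the rows whose (DoorOpen, CardNumber) pair is not already stored.
        foreach (DoorAudit audit in dataTransferObject)
        {
            string key = audit.DoorOpen.Ticks + "|" + audit.CardNumber;
            if (existingKeys.Contains(key))
                continue;

            myEntities.AddToImportDoorAccess(new ImportDoorAccess
            {
                CardNumber = audit.CardNumber,
                Door = audit.Door,
                DoorOpen = audit.DoorOpen,
                Imported = audit.Imported,
                RawData = audit.RawData,
                UserName = audit.UserName
            });
        }

        myEntities.SaveChanges();
    }

    Separately, the UpdateException about a DefiningQuery is usually a sign that the ImportDoorAccess table has no primary key, so the Entity Framework maps it as a read-only view; adding a primary key and refreshing the model is the typical fix for that part.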

    Read the article

  • Android - Querying the SMS ContentProvider?

    - by Donal Rafferty
    I currently register a content observer on the URI "content://sms/" to listen for incoming and outgoing messages being sent. This seems to work OK. I have also tried deleting from the SMS database, but I can only delete an entire thread via the URI "content://sms/conversations/". Here is the code I use for that:

    String url = "content://sms/";
    Uri uri = Uri.parse(url);
    getContentResolver().registerContentObserver(uri, true, new MyContentObserver(handler));

    class MyContentObserver extends ContentObserver {
        public MyContentObserver(Handler handler) {
            super(handler);
        }

        @Override
        public boolean deliverSelfNotifications() {
            return false;
        }

        @Override
        public void onChange(boolean arg0) {
            super.onChange(arg0);
            Log.v("SMS", "Notification on SMS observer");
            Message msg = new Message();
            msg.obj = "xxxxxxxxxx";
            handler.sendMessage(msg);
            Uri uriSMSURI = Uri.parse("content://sms/");
            Cursor cur = getContentResolver().query(uriSMSURI, null, null, null, null);
            cur.moveToNext();
            String protocol = cur.getString(cur.getColumnIndex("protocol"));
            if (protocol == null) {
                Log.d("SMS", "SMS SEND");
                int threadId = cur.getInt(cur.getColumnIndex("thread_id"));
                Log.d("SMS", "SMS SEND ID = " + threadId);
                Cursor c = getContentResolver().query(Uri.parse("content://sms/outbox/" + threadId), null, null, null, null);
                c.moveToNext();
                int p = cur.getInt(cur.getColumnIndex("person"));
                Log.d("SMS", "SMS SEND person= " + p);
                //getContentResolver().delete(Uri.parse("content://sms/outbox/" + threadId), null, null);
            } else {
                Log.d("SMS", "SMS RECEIVE");
                int threadIdIn = cur.getInt(cur.getColumnIndex("thread_id"));
                getContentResolver().delete(Uri.parse("content://sms/outbox/" + threadIdIn), null, null);
            }
        }
    }

    However, I want to be able to get the recipient and the message text from the SMS content provider. Can anyone tell me how to do this, and also how to delete a single message instead of an entire thread?

    Read the article

  • prevent outlook stationery from showing up in my email (Outlook 2007)

    - by KevinDeus
    There are some people in my office who insist on using cute stationery, and some of it makes messages difficult to read. I really just want to read email on a white background with no distractions. Is there a way to disable stationery on incoming mail in Outlook (without switching to "plain text only")? Yeah, I yanked that description from here, but it is very accurate; however, I've had no luck finding a solution. Most solutions I see solve the problem by pushing something out to a bunch of users, like this. I don't really have the authority to do that. Not only that, it only prevents ME from setting stationery. This has been asked before to no avail, and I don't have much time to spend on this, so hopefully there is something I have overlooked. Without switching to "plain text only", I want to be able to change a setting on my own computer (it can be a reg hack, I don't care) that will prevent Outlook stationery from showing up in my email. It would also be helpful to know how to do this for Outlook 2003 as well.

    Read the article

  • Need help understanding WPF MeasureOverride and ArrangeOverride infinity and 0 sizes

    - by Scott Bilas
    I'm trying to build a simple panel that contains one child and snaps it to the nearest grid sizes. I'm having a hard time figuring out how to do this by overriding MeasureOverride and ArrangeOverride. Here's what I'm after: I want my panel to tell its owner that (a) it wants to be as large as possible, and then (b) when it finds out what that size is, it will size itself and its child UIElement according to the nearest smaller snap point. So if we're snapping to 10s, and I can be in a region no bigger than 192x184, the panel will tell its parent container "my actual size is going to be 190x180". That way anything bordering my control will be able to align to its edges, as opposed to the potential space. When I put my panel inside a Grid, I get either 0 or PositiveInfinity (I forget which) for the incoming size in the overrides, but what I need to know is "how big can my space actually get?", not infinities. Part of the problem, I think, is that WPF treats PositiveInfinity and 0 as magic values for size. I need a way to say, via MeasureOverride, "I can be as big as you will allow me", and in ArrangeOverride to actually size to the snapped size. Or am I going about this the completely wrong way? Measuring and arranging look very complicated, just from wandering around a little in the code for the standard panels in Reflector.
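    (For what it's worth, here is one way the snapping could be sketched out. This is illustrative only, not a verified answer to the question above: the SnapPanel class name and the fixed 10-unit increment are made up, it assumes a single child hosted in a Decorator, and it simply rounds finite constraints down to the nearest snap point. One relevant detail: MeasureOverride must not return an infinite size, so on an unconstrained axis the sketch falls back to the child's desired size, while the size passed to ArrangeOverride is always finite, so snapping there is safe.)

    public class SnapPanel : Decorator
    {
        private const double SnapIncrement = 10.0;   // assumed fixed snap size

        private static double SnapDown(double value)
        {
            // Infinite or NaN constraints cannot be snapped; pass them through.
            if (double.IsInfinity(value) || double.IsNaN(value))
                return value;
            return Math.Floor(value / SnapIncrement) * SnapIncrement;
        }

        protected override Size MeasureOverride(Size constraint)
        {
            Size snapped = new Size(SnapDown(constraint.Width), SnapDown(constraint.Height));
            if (Child != null)
                Child.Measure(snapped);

            // Claim the full snapped space when the constraint is finite; otherwise
            // report the child's desired size (infinity is not a legal return value).
            double width = double.IsInfinity(constraint.Width)
                ? (Child != null ? Child.DesiredSize.Width : 0)
                : snapped.Width;
            double height = double.IsInfinity(constraint.Height)
                ? (Child != null ? Child.DesiredSize.Height : 0)
                : snapped.Height;
            return new Size(width, height);
        }

        protected override Size ArrangeOverride(Size arrangeSize)
        {
            // The final size handed down here is always finite, so snap it
            // and give the child the snapped rectangle.
            Size snapped = new Size(SnapDown(arrangeSize.Width), SnapDown(arrangeSize.Height));
            if (Child != null)
                Child.Arrange(new Rect(snapped));
            return snapped;
        }
    }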

    Read the article

  • send sms from background thread in blackberry using j2me

    - by SWATI
    Hey, I have searched a lot and found some similar pieces of code. I tried the GSM approach. Method 1 gives an IllegalArgumentException:

    try {
        MessageConnection _mc = (MessageConnection) Connector.open("sms://");
        TextMessage tm = (TextMessage) _mc.newMessage(MessageConnection.TEXT_MESSAGE);
        tm.setPayloadText(smsText);
        tm.setAddress("965xxxxxxx");
        _mc.send(tm);
        _mc.close();
    } catch (Exception e) {
    }

    Method 2 gives a java.lang.Error exception:

    try {
        MessageConnection _mc = (MessageConnection) Connector.open("sms://");
        TextMessage tm = (TextMessage) _mc.newMessage(MessageConnection.TEXT_MESSAGE, "//9790XXXXXX");
        tm.setPayloadText(text);
        _mc.send(tm);
        _mc.close();
    } catch (Exception e) {
    }

    I think the problem is with the address. I also tried +91965xxxxxxx, 0091965xxxxxxx and 0965xxxxxxx, but with no success. How my application works: I have created two applications. 1) Application 1 is a background app that is a system module as well as a startup application. 2) The other is a UiApplication. The background app runs in the background; if an incoming call arrives, a flag value is set in a persistent object, and after checking that the value is true, the SMS is sent to the number the call came from.

    Read the article

  • Biztalk Server 2009 - Failover Clustering and Network Load Balancing (NLB)

    - by FullOfQuestions
    Hi, we are planning a BizTalk 2009 setup in which we have 2 BizTalk application servers and 2 DB servers (the DB servers being in an active/passive cluster). All servers are running Windows Server 2008 R2. As part of our application, we will have incoming traffic via the MSMQ, FILE and SOAP adapters. We also have a requirement for high availability and load balancing. Let's say I create two different BizTalk hosts and assign the FILE receive handler to the first one and the MSMQ receive handler to the second one. I now create two host instances for each of the two hosts (i.e. one for each of my two physical servers). After reviewing the BizTalk documentation, this is what I know so far: For FILE (receive), high availability and load balancing will be achieved by BizTalk automatically because I set up a host instance on each of the two servers in the group. MSMQ (receive) requires BizTalk host clustering to ensure high availability (host clustering, however, requires Windows Failover Clustering to be set up as well); no load-balancing option is clear here. SOAP (receive) requires NLB to achieve load balancing and high availability (if one server goes down, NLB will direct traffic to the other). This is where I'm completely puzzled and I desperately need your help: Is it possible to have a Windows Failover Cluster and NLB set up at the same time on the two application servers? If yes, then please tell me how. If no, then please explain to me how anyone is achieving high availability and load balancing for MSMQ and SOAP when their underlying prerequisites are mutually exclusive! Your help is greatly appreciated, M

    Read the article

  • Rewind request body stream

    - by Despertar
    I am re-implementing a request logger as OWIN middleware which logs the request URL and body of all incoming requests. I am able to read the body, but if I do, the body parameter in my controller is null. I'm guessing it's null because the stream position is at the end, so there is nothing left to read when Web API tries to deserialize the body. I had a similar issue in a previous version of Web API but was able to set the stream position back to 0. This particular stream throws a "This stream does not support seek operations" exception. In the most recent version of Web API 2.0 I could call Request.HttpContent.ReadAsStringAsync() inside my request logger, and the body would still arrive at the controller intact. How can I rewind the stream after reading it? Or how can I read the request body without consuming it?

    public class RequestLoggerMiddleware : OwinMiddleware
    {
        public RequestLoggerMiddleware(OwinMiddleware next)
            : base(next)
        {
        }

        public override Task Invoke(IOwinContext context)
        {
            return Task.Run(() =>
            {
                string body = new StreamReader(context.Request.Body).ReadToEnd();
                // log body
                context.Request.Body.Position = 0; // cannot set stream position back to 0
                Console.WriteLine(context.Request.Body.CanSeek); // prints false
                this.Next.Invoke(context);
            });
        }
    }

    public class SampleController : ApiController
    {
        public void Post(ModelClass body)
        {
            // body is now null if the middleware reads it
        }
    }
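    (One common workaround - offered here as a sketch, not as the definitive answer to the question above - is to copy the non-seekable request stream into a MemoryStream, log from that copy, rewind it, and then substitute it as the request body so the Web API formatter downstream still has something to read. The class below mirrors the middleware from the question; the Console.WriteLine call is just a placeholder for real logging.)

    public class BufferingRequestLoggerMiddleware : OwinMiddleware
    {
        public BufferingRequestLoggerMiddleware(OwinMiddleware next) : base(next) { }

        public override async Task Invoke(IOwinContext context)
        {
            // Copy the incoming (forward-only) body into a seekable buffer.
            var buffer = new MemoryStream();
            await context.Request.Body.CopyToAsync(buffer);

            // Log from the buffered copy.
            buffer.Position = 0;
            string body = new StreamReader(buffer).ReadToEnd();
            Console.WriteLine("{0} {1} {2}", context.Request.Method, context.Request.Uri, body);

            // Rewind and hand the buffered copy to the rest of the pipeline,
            // so model binding in the controller still sees the full body.
            buffer.Position = 0;
            context.Request.Body = buffer;

            await Next.Invoke(context);
        }
    }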

    Read the article

  • why DKIM bad signature

    - by Ashish
    Hi, I have set up a Postfix mail-receiving server. On it I am using the DKIM milter to verify the DKIM signatures of incoming mail. From some email clients I am getting the following 'bad signature data' error:

    Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) no signing domain match for `gmail.com'
    Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) no signing subdomain match for `gmail.com'
    Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) no signing keylist match for `[email protected]'
    Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) not internal
    Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) not authenticated
    Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: (unknown-jobid) mode select: verifying
    Jun 7 02:10:09 ip-10-194-99-63 dkim-filter[27964]: BA6E210015D: bad signature data
    Jun 7 02:10:10 ip-10-194-99-63 postfix/cleanup[30131]: BA6E210015D: milter-reject: END-OF-MESSAGE from mail-pv0-f176.google.com[74.125.83.176]: 5.7.0 bad DKIM signature data; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<mail-pv0-f176.google.com>

    Can anybody give me a clue as to why I am getting the above error in my Postfix logs, and what remedies I can put in place to rectify or work around it, or to warn the sender? Thanks in advance, Ashish Sharma

    Read the article

  • The remote server returned an unexpected response: (413) Request Entity Too Large

    - by user1583591
    If anyone can help me figure out why I am getting the following error when making a call to my WCF service, I would be eternally grateful:

    The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.

    I have tried modifying the config file on both the service and the client, and made sure the service name includes the namespace, but I cannot seem to make any progress. Here are my service config settings:

    <services>
      <service name="CCC.CA-CP &amp; Sightlines Campus Carbon Calculator">
        <endpoint address="" binding="basicHttpBinding" bindingConfiguration="Binding2"
                  contract="CCC.ICCCService" behaviorConfiguration="WebBehavior2" />
      </service>
    </services>
    <bindings>
      <basicHttpBinding>
        <binding name="Binding2" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false"
                 hostNameComparisonMode="StrongWildcard" maxBufferSize="2147483647" maxBufferPoolSize="52428800"
                 maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8"
                 transferMode="Buffered" useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="16384"
                        maxBytesPerRead="20000" maxNameTableCharCount="16384"></readerQuotas>
        </binding>
      </basicHttpBinding>
    </bindings>
    ..
    <dataContractSerializer maxItemsInObjectGraph="12097151" />
    ...
    <requestLimits maxAllowedContentLength="157286400" />
    ...
    <httpRuntime useFullyQualifiedRedirectUrl="true" maxRequestLength="2147483647"...

    I also set the client config with the same binding values. Here is the service contract:

    namespace CCC
    {
        [ServiceContract(Name = "CA-CP & Sightlines Campus Carbon Calculator", Namespace = "http://www.sightlines.com/CCC/01")]
        public interface ICCCService
        {
            ....
        }

    Thanks in advance for any help given!
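    (A hedged side note, not a confirmed diagnosis: when the error still reports the default 65536 quota even though the config raises it, the binding configuration is often simply not being applied - for instance, the name attribute on <service> has to be the fully qualified CLR type name of the service implementation class, and a display-style name like the one above would not match, so WCF may fall back to its defaults. One way to take client-side config mismatches out of the picture is to build the binding in code. The sketch below uses the ICCCService contract from the question; the endpoint URL and the specific quota values are placeholders.)

    var binding = new BasicHttpBinding
    {
        MaxReceivedMessageSize = 2147483647,
        MaxBufferSize = 2147483647,          // must match MaxReceivedMessageSize in Buffered mode
        SendTimeout = TimeSpan.FromMinutes(5)
    };
    binding.ReaderQuotas.MaxStringContentLength = 2147483647;
    binding.ReaderQuotas.MaxArrayLength = 2147483647;

    var address = new EndpointAddress("http://localhost/CCCService.svc");   // placeholder URL
    var factory = new ChannelFactory<ICCCService>(binding, address);
    ICCCService client = factory.CreateChannel();
    // ... call the service through 'client', then close or abort the factory ...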

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >