Search Results

Search found 26454 results on 1059 pages for 'post parameter'.


  • Get to Know a Candidate (6 of 25): Jill Stein – Green Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced from Wikipedia.

    Stein is a physician with degrees from Harvard College and Harvard Medical School. She serves on the boards of Greater Boston Physicians for Social Responsibility and MassVoters for Fair Elections, and has been active with the Massachusetts Coalition for Healthy Communities.

    Jill Stein advocates a "Green New Deal" in which renewable energy jobs would be created to address climate change and environmental issues, with the objective of employing "every American willing and able to work". Citing the research of Dr. Phillip Harvey, Professor of Law & Economics at Rutgers University, as evidence of the successful economic effects of the 1930s' New Deal projects, Stein would fund the plan with a 30% reduction in the U.S. military budget, returning US troops home, and increasing taxes on areas such as capital gains, offshore tax havens and multimillion-dollar real estate. Stein plans on impacting what she sees as a growing convergence of environmental crises in water, soil, fisheries and forests through the creation of sustainable infrastructure based in clean renewable energy generation and sustainable-communities principles, such as increasing intra-city mass transit and inter-city railroads, creating 'complete streets' that safely encourage bike and pedestrian traffic, and regional food systems based on sustainable organic agriculture.

    The Green Party of the United States was founded in 1991 as a voluntary association of state green parties. With its founding, the Green Party of the United States became the primary national Green organization in the United States, eclipsing the Greens/Green Party USA, which emphasized non-electoral movement building. The Green Party of the United States emphasizes environmentalism, non-hierarchical participatory democracy, social justice, respect for diversity, peace and nonviolence. Their "Ten Key Values," which are described as non-authoritative guiding principles, are as follows:

    1. Grassroots democracy
    2. Social justice and equal opportunity
    3. Ecological wisdom
    4. Nonviolence
    5. Decentralization
    6. Community-based economics
    7. Feminism and gender equality
    8. Respect for diversity
    9. Personal and global responsibility
    10. Future focus and sustainability

    The Green Party does not accept donations from corporations. Thus, the party's platforms and rhetoric critique any corporate influence and control over government, media, and American society at large. Stein has access to 403 electoral votes and is a write-in candidate in GA, IN, and MS.

    Learn more about Jill Stein and the Green Party on Wikipedia.

    Read the article

  • How to build Visual Studio Setup projects (.vdproj) with TFS 2010 Build?

    - by Vishal
    Out of the box, Team Foundation Server 2010 Build does not support building setup projects (.vdproj), although you can modify DefaultTemplate.xaml or create your own in order to achieve this. I had to try a bunch of different blog posts and finally got it working with a mixture of all of them.

    So that you don't have to go through this pain again, I have uploaded the template, which you can use right away: https://skydrive.live.com/redir?resid=65B2671F6B93CDE9!310

    - Download and check in this template into your source control.
    - Modify your build definition to use this template. Unless you have checked in the template, it won't show up in the template selection section in the process task of the build definition.
    - In your Visual Studio Solution Configuration Manager, make sure you specify to build the setup project also.

    You might get this warning in your build result: “The project file "*.vdproj" is not supported by MSBuild and cannot be built.”

    Hope it helps. Thanks, Vishal Mody

    Reference blog posts I had used:
    http://geekswithblogs.net/jakob/archive/2010/05/14/building-visual-studio-setup-projects-with-tfs-2010-team-build.aspx
    http://donovanbrown.com/post/I-need-to-build-a-project-that-is-not-supported-by-MSBuild.aspx
    http://lajak.wordpress.com/2011/02/19/build-biztalk-deployment-framework-projects-using-tfs2010/

    Read the article

  • Another Exchange 2003 to Exchange 2010 mail flow issue

    - by Ryan Roussel
    During a migration recently, we came across another internal mail routing issue. The symptoms were identical to my previous post about Exchange internal mail routing: mail was flowing from 2010 to 2003, and from 2010 to the internet, but not from 2003 to 2010.

    I went through the normal checklist, looking at permissions, DNS, and the routing group connectors. I verified that both servers listed in the routing group connectors were the routing master in their respective routing groups through the 2003 ESM. I also verified that inheritable permissions were enabled for the Exchange 2003 server object in the schema. No luck with either.

    For my previous post about this issue, in which inheritable permissions were the culprit, see: Exchange 2010, Exchange 2003 Mail Flow issue. And for routing group issues: Exchange 2007 Routing Group Connector Mayhem.

    I finally enabled logging on the SMTP virtual server on Exchange 2003 and the Default Receive Connector on 2010 and sent a few test e-mails, where I found 2003 was having issues authenticating to 2010. By default, 2003 uses Exchange Server Authentication to communicate with 2010. The exact error, found in the SMTP logs on the Exchange 2003 side, was:

        4.7.0 Temporary Authentication Failure

    After searching on this error, I found the solution: the "Access this computer from the network" user right in the local computer policy on the Exchange 2010 server had been changed from the default. The network administrator had modified the Default Domain policy and changed this user right assignment to list only Domain Users.

    The fix was to clear this setting in the Default Domain policy, force gpupdate to refresh the group policy settings, then ensure the appropriate users and groups were listed. This immediately fixed the problem and the Exchange 2003 server was able to route mail to the Exchange 2010 mailboxes.

    The default user rights assignments for "Access this computer from the network" are:

    On workstations and servers:
    - Administrators
    - Backup Operators
    - Power Users
    - Users
    - Everyone

    On domain controllers:
    - Administrators
    - Authenticated Users
    - Everyone

    More can be found here: http://technet.microsoft.com/en-us/library/cc740196(WS.10).aspx

    Read the article

  • Cmd+Less Than (10.8.2) not working after Xcode (4.5.x) installed

    - by Felix Lieb
    I had to reinstall my MBP recently. I press Cmd+Less Than a lot for switching between Xcode's main window and the Organizer for documentation; that is the standard OS X shortcut for doing so. After installing Xcode it didn't work any longer. I saw that Xcode uses Cmd+Less Than for "Edit Schemes", a rarely used option. Even after deleting the shortcut for "Edit Schemes" in Xcode, Cmd+Less Than didn't work. How can I get Cmd+Less Than to work again?

    Mac OS X Mountain Lion 10.8.2, Xcode 4.5.2

    I have less than 10 reputation on Super User (actually, this is my first post here), so I can't post the answer to my own question. Would you be so kind as to upvote this question, so I can officially answer it? Both the question and the answer apply only if you use the German keyboard layout.

    Read the article

  • How to make outlook.com/Office 365 use plain text and/or conventional quotes?

    - by user23122
    I am forced to use the web version of Outlook at Office 365. I really dislike how it formats e-mail, as well as how it handles quoting when replying. In the desktop version of Outlook you can at least force it to display messages in plain text, and then you can manually bottom-post or post interleaved. Plain text also changes how it handles quotes: a "> " prefix is used rather than some braindead RTF version of format=flowed (I like format=flowed when it is implemented without bugs), although the attribution line is completely useless. In the online version, however, I can't find a way to achieve even this. Any ideas? I guess a Greasemonkey script could do this?

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server) or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power users soon switch to using a SQL Server database because it offers better performance and data-sharing.

    In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   28                 19.0                8115                  8115
        MDB          114                77.6                1449                  1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.)

    So, it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data

    What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version, or just from the UI version? Each question could skew the data 10-fold either way, and the answers are known only by the developer that instrumented the application in the first place. In other words, only the original developer can interpret the data - product managers cannot interpret the data unaided.

    Most of the data is from uninterested users

    About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many were JUST HAPPENING to use this MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or are they using X because they want to use X? We need to segment the data further, asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

    You can't find out why they used this feature

    Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at RedGate, we would eliminate the most-important and more-complex features which people actually buy the software for, leaving just big buttons on the main page and the About box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!"

    People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's useless to collect it.

    People get obsessed with significance levels

    The first thing that lots of people do with this data is run a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other causes of error and misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

    Confirmation bias prevents objectivity

    If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There are always multiple plausible ways to interpret/action any data

    Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   13                 9.0                 1115                  1115
        MDB          5                  4.2                 449                   449

    Trial users:

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   15                 10.0                7000                  7000
        MDB          114                77.6                1000                  1000

    How do you interpret this data? It's one of:

    - Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford, or unwilling to buy, our software. Therefore, ditch MDB support.
    - Our MDB support is so poor and buggy that our massive MDB user base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
    - People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
    - We're marketing the tool wrong. The large number of MDB users represent uninformed downloaders. Tell marketing to aggressively target SQL Server users.

    To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage

    Metrics tend to be opt-in. This skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well:

    - Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows?
    - Minor optimizations. Is the text box big enough for average user input?
    - Performance data. How long does our app take to start? How many databases does the average user have on their server?

    As you can see, questions about who-the-user-is rather than what-the-user-does are easier to answer and action.

    Conclusion

    Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.
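
    To make the first complaint concrete, here is a hypothetical sketch (this is not SmartAssembly's actual reporting API) of how two equally defensible instrumentation choices produce 'usage' counts an order of magnitude apart:

        // Hypothetical instrumentation - not the real Feature-Usage-Reporting API.
        static class Metrics
        {
            public static void ReportUsage(string feature)
            {
                // send the feature name to the metrics store
            }

            // Choice A: one 'usage' per run.
            public static void OnApplicationStart(string databaseKind)
            {
                ReportUsage(databaseKind); // "MDB" counted once per session
            }

            // Choice B: one 'usage' per click of 'Obfuscate Now'.
            public static void OnObfuscateNowClicked(string databaseKind)
            {
                ReportUsage(databaseKind); // counted on every click - easily 10x more
            }
        }

    Unless you know which choice the developer made, the "Number of usages" column cannot be compared across features.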

    Read the article

  • Trying to create a git repo that does an automatic checkout every time someone updates origin

    - by Dane Larsen
    Basically, I have a server with a git repo 'origin'. I'm trying to have another repo auto-pull from origin every time someone pushes code to it. I've been using the hooks in origin, specifically post-receive. So far, my post-receive hook looks something like this:

        #!/bin/sh
        GIT_DIR=/home/<user>/<test_repo>
        git pull origin master

    But when I push to origin from another computer, I get the error:

        remote: fatal: Not a git repository: '/home/<user>/<test_repo>'

    However, test_repo most definitely is a git repo. I can cd into it and run 'git pull origin master' and it works fine. Is there an easier way to do what I'm trying to do? If not, what am I doing wrong with this approach? Thanks in advance.

    Edit, to clarify: the repo is a website in progress, and I'd like to have a version of it available at all times that is fully up to date.

    Read the article

  • Using LINQ to Twitter OAuth with Windows 8

    - by Joe Mayo
    In previous posts, I explained how to use LINQ to Twitter with Windows 8, but the example was a Twitter search, which didn't require authentication. Much of the Twitter API requires authentication, so this post will explain how you can perform OAuth authentication with LINQ to Twitter in a Windows 8 Metro-style application.

    Getting Started

    I have earlier posts on how to create a Windows 8 app and add pages, so I'll assume it isn't necessary to repeat that here. One difference is that I'm using Visual Studio 2012 RC, and some of the terminology and/or library code might be slightly different. Here are the steps to get started:

    1. Create a new Windows Metro-style app, selecting the Blank App project template.
    2. Create a new Basic Page and name it OAuthPage.xaml. Note: you'll receive a prompt window for adding files, and you should click Yes because those files are necessary for this demo.
    3. Add a new Basic Page named TweetPage.xaml.
    4. Open App.xaml.cs and change !rootFrame.Navigate(typeof(MainPage)) to !rootFrame.Navigate(typeof(TweetPage)).

    Now that the project is set up, you'll see the reason why authentication is required by setting up the TweetPage.

    Setting Up to Tweet a Status

    In this section, I'll show you how to set up the XAML and code-behind for a tweet. The tweet logic will check to see if the user is authenticated before performing the tweet. To tweet, I put a TextBox and Button on the XAML page. The following code omits most of the page, concentrating primarily on the elements of interest in this post:

        <StackPanel Grid.Row="1">
            <TextBox Name="TweetTextBox" Margin="15" />
            <Button Name="TweetButton" Content="Tweet" Click="TweetButton_Click" Margin="15,0" />
        </StackPanel>

    Given the UI above, the user types the message they want to tweet, and taps Tweet. This invokes TweetButton_Click, which checks to see if the user is authenticated. If the user is not authenticated, the app navigates to the OAuth page. If they are authenticated, LINQ to Twitter does an UpdateStatus to post the user's tweet. Here's the TweetButton_Click implementation:

        void TweetButton_Click(object sender, RoutedEventArgs e)
        {
            PinAuthorizer auth = null;

            if (SuspensionManager.SessionState.ContainsKey("Authorizer"))
            {
                auth = SuspensionManager.SessionState["Authorizer"] as PinAuthorizer;
            }

            if (auth == null || !auth.IsAuthorized)
            {
                Frame.Navigate(typeof(OAuthPage));
                return;
            }

            var twitterCtx = new TwitterContext(auth);
            Status tweet = twitterCtx.UpdateStatus(TweetTextBox.Text);

            new MessageDialog(tweet.Text, "Successful Tweet").ShowAsync();
        }

    For authentication, this app uses PinAuthorizer, one of several authorizers available in the LINQ to Twitter library. I'll explain how PinAuthorizer works in the next section. What's important here is that LINQ to Twitter needs an authorizer to post a tweet. The code above checks to see if a valid authorizer is available. To do this, it uses the SuspensionManager class, which is part of the code generated earlier when creating OAuthPage.xaml. The SessionState property is a Dictionary<string, object>, and I'm using the "Authorizer" key to store the PinAuthorizer. If the user previously authorized during this session, the code reads the PinAuthorizer instance from SessionState and assigns it to the auth variable. If the user is authorized, auth would not be null and IsAuthorized would be true. Otherwise, the app navigates the user to OAuthPage.xaml, which I'll discuss in more depth in the next section.

    When the user is authorized, the code passes the authorizer, auth, to the TwitterContext constructor. LINQ to Twitter uses the auth instance to build OAuth signatures for each interaction with Twitter. You no longer need to write any more code to make this happen. The code above takes the tweet just posted, returned in the Status instance, tweet, and displays a message with the text to confirm success to the user.

    You can pull the PinAuthorizer instance from SessionState, instantiate your TwitterContext, and use it as you need. Just remember to make sure you have a valid authorizer, like the code above. As shown earlier, the code navigates to OAuthPage.xaml when a valid authorizer isn't available. The next section shows how to perform the authorization upon arrival at OAuthPage.xaml.

    Doing the OAuth Dance

    This section shows how to authenticate with LINQ to Twitter's built-in OAuth support. From the user's perspective, they must be navigated to the Twitter authentication page, add credentials, be navigated to a PIN number page, and then enter that PIN in the Windows 8 application. The following XAML shows the relevant elements that the user will interact with during this process:

        <StackPanel Grid.Row="2">
            <WebView x:Name="OAuthWebBrowser" HorizontalAlignment="Left" Height="400" Margin="15" VerticalAlignment="Top" Width="700" />
            <TextBlock Text="Please perform OAuth process (above), enter Pin (below) when ready, and tap Authenticate:" Margin="15,15,15,5" />
            <TextBox Name="PinTextBox" Margin="15,0,15,15" Width="432" HorizontalAlignment="Left" IsEnabled="False" />
            <Button Name="AuthenticatePinButton" Content="Authenticate" Margin="15" IsEnabled="False" Click="AuthenticatePinButton_Click" />
        </StackPanel>

    The WebView in the code above is what allows the user to see the Twitter authentication page. The TextBox is for entering the PIN, and the Button invokes code that will take the PIN and allow LINQ to Twitter to complete the authentication process. As you can see, there are several steps to OAuth authentication, but LINQ to Twitter tries to minimize the amount of code you have to write. The two important parts of the code are the part that starts the authentication process and the part that completes it. The following code, from OAuthPage.xaml.cs, shows a couple of events that are instrumental in making this process happen:

        public OAuthPage()
        {
            this.InitializeComponent();

            this.Loaded += OAuthPage_Loaded;
            OAuthWebBrowser.LoadCompleted += OAuthWebBrowser_LoadCompleted;
        }

    The OAuthWebBrowser_LoadCompleted event handler enables UI controls when the browser is done loading - notice that the TextBox and Button in the previous XAML have their IsEnabled attributes set to False. When the Page.Loaded event is invoked, the OAuthPage_Loaded handler starts the OAuth process, shown here:

        void OAuthPage_Loaded(object sender, RoutedEventArgs e)
        {
            auth = new PinAuthorizer
            {
                Credentials = new InMemoryCredentials
                {
                    ConsumerKey = "",
                    ConsumerSecret = ""
                },
                UseCompression = true,
                GoToTwitterAuthorization = pageLink =>
                    Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                        OAuthWebBrowser.Navigate(new Uri(pageLink, UriKind.Absolute)))
            };

            auth.BeginAuthorize(resp =>
                Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                {
                    switch (resp.Status)
                    {
                        case TwitterErrorStatus.Success:
                            break;
                        case TwitterErrorStatus.RequestProcessingException:
                        case TwitterErrorStatus.TwitterApiError:
                            new MessageDialog(resp.Error.ToString(), resp.Message).ShowAsync();
                            break;
                    }
                }));
        }

    The PinAuthorizer, auth, a field of this class instantiated in the code above, assigns keys to the Credentials property. These are credentials that come from registering an application with Twitter, explained in the LINQ to Twitter documentation, Securing Your Applications.

    Notice how I use Dispatcher.RunAsync to marshal the web browser navigation back onto the UI thread. Internally, LINQ to Twitter invokes the lambda expression assigned to GoToTwitterAuthorization when starting the OAuth process. In this case, we want the WebView control to navigate to the Twitter authentication page, which is defined with a default URL in LINQ to Twitter and passed to the GoToTwitterAuthorization lambda as pageLink.

    Then you need to start the authorization process by calling BeginAuthorize. This starts the OAuth dance, running asynchronously. LINQ to Twitter invokes the callback assigned to the BeginAuthorize parameter, allowing you to take whatever action you need, based on the Status of the response, resp.

    As mentioned earlier, this is where the user performs the authentication process, enters the PIN, and clicks Authenticate. The handler for Authenticate completes the process and saves the authorizer for subsequent use by the application, as shown below:

        void AuthenticatePinButton_Click(object sender, RoutedEventArgs e)
        {
            auth.CompleteAuthorize(
                PinTextBox.Text,
                completeResp => Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                {
                    switch (completeResp.Status)
                    {
                        case TwitterErrorStatus.Success:
                            SuspensionManager.SessionState["Authorizer"] = auth;
                            Frame.Navigate(typeof(TweetPage));
                            break;
                        case TwitterErrorStatus.RequestProcessingException:
                        case TwitterErrorStatus.TwitterApiError:
                            new MessageDialog(completeResp.Error.ToString(), completeResp.Message).ShowAsync();
                            break;
                    }
                }));
        }

    The PinAuthorizer CompleteAuthorize method takes two parameters: the PIN and a callback. The PIN is what the user entered in the TextBox prior to clicking the Authenticate button that invoked this method. The callback handles the response from completing the OAuth process. The completeResp holds information about the results of the operation, indicated by a Status property of type TwitterErrorStatus. On success, the code assigns auth to SessionState. You might remember SessionState from the previous description of TweetPage - this is where the valid authorizer comes from. After saving the authorizer, the code navigates the user back to TweetPage, where they can type in a message, click the Tweet button, and observe that they have successfully tweeted.

    Summary

    You've seen how to get started with using LINQ to Twitter in a Metro-style application. The generated code contained a SuspensionManager class with a way to manage information across multiple pages via its SessionState property. You also saw how LINQ to Twitter performs authorization in two steps: starting the process and completing it when the user provides a PIN number. Remember to marshal callback threads back onto the UI thread - you saw earlier how to use Dispatcher.RunAsync to accomplish this. There were a few steps in the process, but LINQ to Twitter did minimize the amount of code you needed to write to make it happen.

    You can download the MetroOAuthDemo.zip sample on the LINQ to Twitter Samples Page.

    @JoeMayo

    Read the article

  • Limitations of the SharePoint join using CAML

    - by ybbest
    Limitation One

    In SharePoint 2010, you can join the primary list to a foreign list and include more than one field from the foreign list. However, the limitation is that the included fields from the foreign list have to be one of the following types:

    - Calculated (treated as plain text)
    - ContentTypeId
    - Counter
    - Currency
    - DateTime
    - Guid
    - Integer
    - Note (one-line only)
    - Number
    - Text

    The above limitation also explains why you cannot include some types of fields from the remote list when creating a lookup.

    Limitation Two

    When using a CAML query to join SharePoint lists, there can be joins to multiple lists, multiple joins to the same list, and chains of joins. However, the limitations are that only inner and left outer joins are permitted, and the field in the primary list must be a Lookup type field that looks up to the field in the foreign list.

    Limitation Three

    The support for writing the JOIN query in CAML is very limited. I had to hand-code the CAML query to join the lists, which is not fun at all. Some blog posts mention using LINQ to SharePoint to get the CAML code from there, but I never got it to work. You can check this blog post for that; let me know if it works for you.

    References:
    http://msdn.microsoft.com/en-us/library/ee535502.aspx
    http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spquery.joins.aspx
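
    To make the hand-coding concrete, here is a minimal sketch of a CAML join through the server object model's SPQuery.Joins and ProjectedFields properties; all list, field, and alias names below are hypothetical:

        // Requires a reference to Microsoft.SharePoint.dll (server object model).
        using Microsoft.SharePoint;

        static class JoinExample
        {
            // Inner join from an Orders list to a Customers list via a lookup field.
            public static SPListItemCollection GetOrdersWithCustomerCity(SPWeb web)
            {
                SPList orders = web.Lists["Orders"];

                var query = new SPQuery();

                // Join from the primary list's lookup field to the foreign list.
                query.Joins =
                    "<Join Type='INNER' ListAlias='Customers'>" +
                    "  <Eq>" +
                    "    <FieldRef Name='CustomerLookup' RefType='Id' />" +
                    "    <FieldRef List='Customers' Name='ID' />" +
                    "  </Eq>" +
                    "</Join>";

                // Fields included from the foreign list must be one of the
                // supported types listed under Limitation One.
                query.ProjectedFields =
                    "<Field Name='CustomerCity' Type='Lookup' List='Customers' ShowField='City' />";

                query.ViewFields = "<FieldRef Name='Title' /><FieldRef Name='CustomerCity' />";

                return orders.GetItems(query);
            }
        }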

    Read the article

  • ArchBeat Link-o-Rama for December 4, 2012

    - by Bob Rhubart
    - Exalogic 2.0.1 Tea Break Snippets - Creating and using Distribution Groups | The Old Toxophilist
      "Although in many cases we, as Cloud Users, may not be too worried how the Virtualisation Algorithm decides where to place our vServers," says The Old Toxophilist, "there are cases where it is extremely important that vServers run on distinct physical compute nodes." There's plenty more on the subject in his blog post.

    - Oracle Endeca (2.3) Record Level Security | Adam Seed
      Adam Seed's blog post covers "the basics of security within Endeca Information Discovery, as these basic security objects are required in order to explain the implementation of record level security."

    - ODI Handling DQ | Gurcan Orhan
      Oracle ACE Director Gurcan Orhan suggests you have fun with these scripts for Oracle Data Integrator.

    - Parleys Testimonial at GlassFish Community Event - JavaOne 2012
      Video of Parleys webmaster Stephan Janssen's presentation at the GlassFish Community Event at JavaOne 2012, in which he explains why Parleys moved from Tomcat to GlassFish.

    - Java Spotlight Episode 109: Pete Muir on CDI 1.1
      This edition of Roger Brinkley's Java Spotlight Podcast features an interview with CDI 1.1 spec lead Pete Muir of JBoss/Red Hat. Muir talks about the features in CDI 1.1 and what to expect in the future.

    - Webcast: Java Management Extensions with Oracle WebLogic Server 12c
      Dr. Frank Munz and Dave Cabelus do the talking in this on-demand webcast focused on Oracle WebLogic Server 12c with Java Management Extensions (JMX).

    - Using the Coherence API to get Portable Object Format bytes | Bruno Borges
      Bruno Borges shares a code snippet that illustrates how easy it is to use the Coherence API.

    - Thought for the Day
      "Experience is something you don't get until just after you need it." — Anonymous
      Source: SoftwareQuotes.com

    Read the article

  • Procurement Community Live Chat December 13th on Implementing AME with Purchase Orders

    - by LuciaC
    Do you still have questions on setting up the Approval Management Engine (AME) for Purchase Orders after attending the AME webcast? Or maybe you are new to this topic and need more information? Don't miss our Procurement Community Live Chat on December 13th from 7:00 am to 11:00 am EST. Proactive Support and Development will be available to answer questions.

    You can access the main Oracle Communities page at http://communities.oracle.com (if you are enrolled, the Procurement community will be listed on your left; if you're not already enrolled in the Procurement community, you can do so by clicking on the link Edit Subscriptions). Alternatively, you can reach it from My Oracle Support as follows:

    1. Log into My Oracle Support.
    2. Click on the 'Community' link at the top of the page.
    3. Click in the 'Find a Community' field and enter Procurement.
    4. Double click on Procurement in the list.
    5. Click on the 'Create a Community Post' button and submit your question.

    Then, in the 'Procurement Featured Discussions' section, drill down on the following thread and post your question. We are looking forward to hearing from you!

    Read the article

  • What different ways are there to model restitution in a physics engine?

    - by Mikael Högström
    In my physics engine I give each body a restitution value between 0 and 1. When two bodies collide, there seem to be different views on how the restitution of the collision should be calculated. To me the most intuitive approach seems to be taking the average of the two, but some engines seem to take only the larger one. Are there other ways to do it? Also, could the closing velocity or some other parameter come into effect?
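
    For illustration, here is a sketch of combination rules commonly seen in engines (the names and the enum are hypothetical rather than from any specific engine; Box2D, for instance, takes the maximum):

        // Common ways engines combine two bodies' restitution values.
        using System;

        public enum RestitutionMix { Average, Max, Min, Multiply }

        public static class Restitution
        {
            public static float Combine(float a, float b, RestitutionMix mix)
            {
                switch (mix)
                {
                    case RestitutionMix.Average:  return (a + b) / 2f;   // intuitive middle ground
                    case RestitutionMix.Max:      return Math.Max(a, b); // bounciest material wins
                    case RestitutionMix.Min:      return Math.Min(a, b); // deadest material wins
                    case RestitutionMix.Multiply: return a * b;          // both must be bouncy
                    default: throw new ArgumentOutOfRangeException("mix");
                }
            }
        }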

    Read the article

  • Should I stay in my degree or take an opportunity for management experience?

    - by Adam
    I've read a couple of other posts along these lines and they've been helpful, but I'm wondering if my case is any different. I've been working towards my CS degree while working part-time in a programming job. I'm now about two years away from getting my degree and was just offered a management position at my job. This would mean that I have to work full-time, and I can't really work towards my degree in person anymore. My school doesn't really offer CS classes after hours nor online. It seems from the other posts that I read that getting a degree is very important. Does having management experience trump that? I'm currently leaning towards taking the job and finding some sort of online degree. My school only offers a business degree online; could I just get this in its place? Does the type of degree really matter? For some jobs it's not the type of degree, just that you have one; is there any merit to this in the programming industry? Thanks :)

    Read the article

  • Wiki Application With A Reputation System

    - by Christofian
    I'm really impressed with Stack Exchange's concept of reputation (you gain reputation as you post, and the more you post, the more privileges you get), and I want to apply the concept to a wiki that I am building. Does anyone know of a PHP wiki that has a concept of privileges/reputation similar to Stack Exchange? I'm not necessarily looking for something identical to SE; I'm just looking for a wiki application that gives users more privileges the more they contribute positively to the wiki (SE has downvotes, so the wiki should have some way of identifying negative contributions too). The privileges should be category-based, so the more active you are in a specific category or page, the more privileges you get for that category. There should also be site-wide privileges as well, though those should be harder to access than the category privileges. NOTE: If it is not possible to get both category-wide privileges and site-wide privileges, I will be OK with just category-wide privileges or just site-wide privileges. I should be able to change the requirements for each privilege, through an administration panel or through editing a file (some wiki applications don't have administration interfaces). Does anyone have a script or a solution that will do this? If the script uses something similar to reputation to determine how much a user has positively contributed to the site, then that is OK too. Please note: I am looking for a way to rate individual user contributions, not a way to rate the quality of an entire page.

    Read the article

  • Why do I always get this error when using 'apt-get' commands?

    - by Venki
    I am using Ubuntu 14.04 (with Unity). Just today (as of the date of this post) I ran sudo apt-get update && sudo apt-get upgrade, and at the end of the upgrade process I got the following error:

        Setting up crossplatformui (1.0.38) ...
         * Stopping ACPI services...                                     [ OK ]
         * Starting ACPI services...                                     [ OK ]
        package libqtgui4 exist QT_VERSION = 4
        make -C /lib/modules/3.13.0-27-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules
        make[1]: Entering directory `/usr/src/linux-headers-3.13.0-27-generic'
          CC [M]  /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o
        /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:34:28: fatal error: linux/smp_lock.h: No such file or directory
         #include <linux/smp_lock.h>
                                    ^
        compilation terminated.
        make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1
        make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.13.0-27-generic'
        make: *** [modules] Error 2
        dpkg: error processing package crossplatformui (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         crossplatformui
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    From then on, whatever apt-get command I use (as far as I know, except apt-get update) I keep getting the above error at the end of the process. But whichever apt-get command I use does what it has to without fail. (For example, I tried installing Blender with sudo apt-get install blender and it installed fine, though it showed the above error.) After this I even got a kernel update (from 3.13.0-27 to 3.13.0-29, via the Software Updater), but even now the issue persists. How do I solve this issue?

    Read the article

  • How to fix "Disk drive for /boot/efi is not ready or not present"?

    - by N.N.
    After I updated the BIOS/UEFI to version 1101 on an Asus P8Z68-V PRO motherboard, Ubuntu 11.10 did not boot. After POST, all I saw was a black screen with a blinking cursor in the top left corner. I booted an Ubuntu 11.10 live CD and set the flag for the 20 MB partition before my boot partition to "bios_grub". Then I was able to boot and log in. But now, every time I boot and Ubuntu loads, I get the following message:

        Disk drive for /boot/efi is not ready or not present. Continue waiting or press s to skip or m for manual recovery.

    I am able to log in if I choose to ignore it by pressing s, but what does this message mean? How can I fix what the message warns about? After logging in, I have noticed that /boot/efi is empty. The following forum post speaks of the same issue: ubuntuforums.org/showthread.php?t=1893030. Updating to the latest BIOS/UEFI (version 3203) did not have any effect on this issue.

    Read the article

  • Representing complex object dependencies

    - by max
    I have several classes with a reasonably complex (but acyclic) dependency graph. All the dependencies are of the form: a class X instance contains an attribute of class Y. All such attributes are set during initialization and never changed again. Each class' constructor has just a couple of parameters, and each object knows the proper parameters to pass to the constructors of the objects it contains. Class Outer is at the top of the dependency hierarchy, i.e., no class depends on it.

    Currently, the UI layer only creates an Outer instance; the parameters for the Outer constructor are derived from the user input. Of course, Outer, in the process of initialization, creates the objects it needs, which in turn create the objects they need, and so on. The new development is that a user who knows the dependency graph may want to reach deep into it and set the values of some of the arguments passed to constructors of the inner classes (essentially overriding the values used currently). How should I change the design to support this?

    I could keep the current approach, where all the inner classes are created by the classes that need them. In this case, the information about "user overrides" would need to be passed to the Outer class' constructor in some complex user_overrides structure. Perhaps user_overrides could be the full logical representation of the dependency graph, with the overrides attached to the appropriate edges. Outer would pass user_overrides to every object it creates, and they would do the same. Each object, before initializing lower-level objects, would find its location in that graph and check if the user requested an override to any of the constructor arguments.

    Alternatively, I could rewrite all the objects' constructors to take as parameters the full objects they require. Thus, the creation of all the inner objects would be moved outside the whole hierarchy, into a new controller layer that lies between Outer and the UI layer. The controller layer would essentially traverse the dependency graph from the bottom, creating all the objects as it goes. The controller layer would have to ask the higher-level objects for parameter values for the lower-level objects whenever the relevant parameter isn't provided by the user.

    Neither approach looks terribly simple. Is there any other approach? Has this problem come up enough in the past to have a pattern that I can read about? I'm using Python, but I don't think it matters much at the design level.
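
    As a minimal sketch of the second approach (the poster uses Python, but as they note the language matters little at the design level; all class and parameter names here are hypothetical):

        // Constructors take the full objects they require, not raw parameters.
        class Inner
        {
            public Inner(int setting) { Setting = setting; }
            public int Setting { get; private set; }
        }

        class Middle
        {
            public Middle(Inner inner) { /* store inner */ }
        }

        class Outer
        {
            public Outer(Middle middle) { /* store middle */ }
        }

        // Controller layer: builds the graph bottom-up and lets the user
        // override any constructor argument along the way.
        class Controller
        {
            const int DefaultInnerSetting = 42; // normally asked of the higher-level objects

            public Outer Build(int? innerSettingOverride = null)
            {
                var inner  = new Inner(innerSettingOverride ?? DefaultInnerSetting);
                var middle = new Middle(inner);
                return new Outer(middle);
            }
        }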

    Read the article

  • Avoid SQL Injection with Parameters

    - by simonsabin
    The best way to avoid SQL injection is with parameters. With parameters you can't get SQL injection. You only get SQL injection where you are building a SQL statement by concatenating your parameter values in with your SQL statement. Annoyingly, many T-SQL statements don't take parameters, CREATE DATABASE for instance, or, really annoyingly, ALTER USER. In these situations you have to rely on using QUOTENAME or REPLACE to avoid SQL injection. (Kimberly Tripp talks about this in her recent blog post Little...(read more)
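
    As a minimal client-side illustration of the parameterized approach (the table, column, and connection details here are hypothetical):

        using System;
        using System.Data.SqlClient;

        static class UserLookup
        {
            // The user-supplied value travels as data, never as SQL text,
            // so it cannot alter the structure of the statement.
            public static void PrintMatches(string connectionString, string userName)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT UserName FROM dbo.Users WHERE UserName = @UserName", conn))
                {
                    cmd.Parameters.AddWithValue("@UserName", userName); // parameter, not concatenation
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader["UserName"]);
                    }
                }
            }
        }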

    Read the article

  • Having troubles with LibNoise.XNA and generating tileable maps

    - by Jon
    Following up on my previous post, I found a wonderful port of LibNoise for XNA. I've been working with it for about 8 hours straight and I'm tearing my hair out - I just cannot get maps to tile, and I can't figure out how to do this. Here's my attempt:

        Perlin perlin = new Perlin(1.2, 1.95, 0.56, 12, 2353, QualityMode.Medium);
        RiggedMultifractal rigged = new RiggedMultifractal();
        Add add = new Add(perlin, rigged);

        // Initialize the noise map
        int mapSize = 64;
        this.m_noiseMap = new Noise2D(mapSize, perlin);
        //this.m_noiseMap.GeneratePlanar(0, 1, -1, 1);

        // Generate the textures
        this.m_noiseMap.GeneratePlanar(-1, 1, -1, 1);
        this.m_textures[0] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

        this.m_noiseMap.GeneratePlanar(mapSize, mapSize * 2, mapSize, mapSize * 2);
        this.m_textures[1] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

        this.m_noiseMap.GeneratePlanar(-1, 1, -1, 1);
        this.m_textures[2] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale);

    The first and third ones generate fine - they create a Perlin noise map - but the middle one, which I wanted to be a continuation of the first (as per my original post), is just a bunch of static. How exactly do I get this to generate maps that connect to each other, by entering in the mapSize * tile, using the same seed, settings, etc.?

    Read the article

  • Tree Surgeon 2.0 - The future on the T4 Express

    - by Malcolm Anderson
    If you've never been a fan of Tree Surgeon (http://treesurgeon.codeplex.com/) then skip this post. However, if you have been, there have been some interesting developments over the last couple of years. The biggest one is T4.

    Recently Bill Simser wrote a detailed post about the potential future of Tree Surgeon, called "Tree Surgeon - Alive and Kicking or Dead and Buried". He raised the question:

        Times have changed. Since that last release in 2008 so much has changed for .NET developers. The question is, today is the project still viable? Do we still need a tool to generate a project tree given that we have things like scaffolding systems, NuGet, and T4 templates. Or should we just give the project its rightful and respectful send off as its had a good life and has outlived its usefulness.

    For myself, the answer is: keep it. I've spent the last couple of years doing agile engineering coaching and architecture, and from my experience I can tell you there are a lot of shops out there that would benefit from having Tree Surgeon as a viable product. Many would benefit simply from having the software engineering information that is embedded in the Tree Surgeon site floating around their conversation. Little things like:

    - Keep all of the software needed to run the build with the build, in the version control system.
    - Have your developers and the build system use the same build.
    - Have a one-touch build.
    - Separate your code from your interface.
    - Put unit tests in first, not last.

    I've seen companies with great developers suffer from the problems that naturally come from builds taking 3 and 4 hours to run. It takes work to get that build down to 10 minutes, but the benefits are always worth it. Tree Surgeon gives you a leg up by starting you off with a project that you can drop into your continuous integration system right out of the box.

    Well, it used to be right out of the box. Today you have to play with the project to make it work for you, but even with the issues (it hasn't been updated since 2008) it still gives you a framework, with logical separations, that you can build from.

    If you have used Tree Surgeon in the past, take a few minutes and drop a comment about what difference it made in your development style, and what you are doing differently today because of it.

    Read the article

  • Installing Visual Studio 2010 Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version, I like to document the install. This post seems almost redundant, as I had no problems, but I think that is as valuable to others thinking of installing the Service Pack as all the problems that we sometimes get.

    As per Brian's post, I am installing Visual Studio Team Foundation Server Service Pack 1 first, and indeed, as this is a single-server local deployment, I need to install both. If I only install one, it will leave the other product broken.

    Figure: Hopefully this will be more uneventful

    It takes a little while for your system to be checked to see what components need updating. On my main computer this was pretty quick, but on the laptop it took some time.

    Figure: There are a lot of components to update

    With this update also comes an update to .NET as well as many other components.

    Figure: I downloaded the full 1.5 GB, but you could do a web install

    How long the download takes depends on how good your internet connection is, but as I am now in the US I decided not to trust the internet connection speeds. It took around 30-40 minutes to download the full thing, which is a little slow.

    Figure: I did not need to download, but that would increase the install time

    So on my main computer again this was fast, but again on my netbook this took a little while.

    Figure: The actual install took around 30-40 minutes (2 hours on the netbook)

    I was pretty impressed with the speed of the install, and as Team Explorer is now out of the box with Visual Studio 2010, I don't get the problem of the SP being installed before Team Explorer and having a disjointed experience.

    Figure: As I suspected, no problems with the install

    Figure: Checking in Visual Studio shows that all the servicing points were successful

    This was an easy experience, even if the SP was over 1.5 GB to download. Hopefully I will be discovering things that work better for a good while to come, as well as not finding holes in the product that I had not encountered yet. What were your experiences of installing Visual Studio 2010 Service Pack 1?

    Read the article

  • Live programming help

    - by frazras
    This idea has been floating around my head for a few years. I started some work on it, but I just want to know if it is feasible, sensible, or if there is something else like it out there. I don't want to find out that I was wasting time on a solved issue. Whenever I have a programming issue, this is my sequence:

    1. Google it! That usually brings up a lot of things: blogs, forums, Stack Overflow, Stack Exchange, and even the official docs of the language/framework/CMS.
    2. Ask on IRC: I format my question and try to get people on IRC to help me.
    3. Make a post: I create a post on forums/Stack Overflow/Stack Exchange, or shout on Twitter with hashtags.

    Now, a lot of the time I am in the middle of a project with a deadline. So I want answers NOW!!! Sometimes just 5-15 minutes worth of attention. Usually by the time I am failing to get answers at #2, I am imagining how many people are ONLINE NOW with the skill and my exact answer, but playing video games, watching YouTube or idling online. However, if they were motivated, they would invest the 15 minutes helping me, and that would make a world of difference. I am even in positions where I would PAY for that 15 minutes of instant help. If your rate is as much as $100/hour (a relatively good programmer), that is $25 that might save me 3 hours.

    This help would be live, via text chat/Skype/phone/screen share. Should I continue developing this idea, or is there a better alternative out there? Or is this an unfeasible idea?

    Read the article

  • Reading OpenDocument spreadsheets using C#

    - by DigiMortal
    Excel with its file formats is not the only spreadsheet application that is widely used. There are also users on Linux and Macs, and often they are using OpenOffice and other open-source office packages that use ODF instead of OpenXML. In this post I will show you how to read an Open Document spreadsheet in C#.

    Importer as example

    My previous post about importers showed you how to build flexible importer support into your web application. This post introduces a practical example: one of my importers. Of course, sensitive code is omitted. We start with the ODS importer class and we add new methods as we go.

        public class OdsImporter : ImporterBase
        {
            public OdsImporter()
            {
            }

            public override string[] SupportedFileExtensions
            {
                get { return new[] { "ods" }; }
            }

            public override ImportResult Import(Stream fileStream, long companyId, short year)
            {
                string contentXml = GetContentXml(fileStream);

                var result = new ImportResult();
                var doc = XDocument.Parse(contentXml);

                var rows = doc.Descendants("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-row").Skip(1);

                foreach (var row in rows)
                {
                    ImportRow(row, companyId, year, result);
                }

                return result;
            }
        }

    The class given here just extends the base class for importers (the previous post uses an interface, but as I noted there, you move to an abstract base class when writing code for real projects). The Import method reads data from the *.ods file, parses it (it is XML), finds all data rows and imports the data. As you may see, the first row is skipped. This is because the first row on my sheet is always the headers row.

    Reading the ODS file

    Our import method starts with getting the XML from the *.ods file. ODS files, like OpenXML files, are zipped containers that contain different files. We need content.xml, as all data is kept there. To get the contents of the file, we use the SharpZipLib library to read the uploaded file as a *.zip file.

        private static string GetContentXml(Stream fileStream)
        {
            var contentXml = "";

            using (var zipInputStream = new ZipInputStream(fileStream))
            {
                ZipEntry contentEntry = null;
                while ((contentEntry = zipInputStream.GetNextEntry()) != null)
                {
                    if (!contentEntry.IsFile)
                        continue;
                    if (contentEntry.Name.ToLower() == "content.xml")
                        break;
                }

                if (contentEntry.Name.ToLower() != "content.xml")
                {
                    throw new Exception("Cannot find content.xml");
                }

                var bytesResult = new byte[] { };
                var bytes = new byte[2000];
                var i = 0;

                while ((i = zipInputStream.Read(bytes, 0, bytes.Length)) != 0)
                {
                    var arrayLength = bytesResult.Length;
                    Array.Resize<byte>(ref bytesResult, arrayLength + i);
                    Array.Copy(bytes, 0, bytesResult, arrayLength, i);
                }
                contentXml = Encoding.UTF8.GetString(bytesResult);
            }
            return contentXml;
        }

    If there is a content.xml file, we stop browsing the archive, read this file to memory, and return it as a UTF-8 string.

    Importing rows

    Our last task is to import the rows. We use a special method for this, as we have to handle some tricks here. To keep files smaller, the cell count on a row is not always the same. If we have more than one empty cell one after another, then ODS keeps only one cell for the sequential empty cells. This cell has an attribute called number-columns-repeated, and its value is set to the number of sequential empty cells. This is why we use two indexes for the cells collection.

        private void ImportRow(XElement row, long companyId, short year, ImportResult result)
        {
            var cells = (from c in row.Descendants()
                         where c.Name == "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-cell"
                         select c).ToList();

            var dto = new DataDto();

            var count = cells.Count;
            var j = -1;

            for (var i = 0; i < count; i++)
            {
                j++;
                var cell = cells[i];
                var attr = cell.Attribute("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}number-columns-repeated");
                if (attr != null)
                {
                    var numToSkip = 0;
                    if (int.TryParse(attr.Value, out numToSkip))
                    {
                        j += numToSkip - 1;
                    }
                }

                if (i > 30) break;

                if (j == 0)
                {
                    dto.SomeProperty = cells[i].Value;
                }
                if (j == 1)
                {
                    dto.SomeOtherProperty = cells[i].Value;
                }
                // some more data reading
            }

            // save data
        }

    You can define your own class for import results and add to it all problems found during the data import. Your application gets the results and shows them to the user.

    Conclusion

    Reading ODS files may seem a complex task, but actually it is very easy if we only need the data from those documents. We can use a zip library to get the content file and then parse it to XML. It is not hard to go through the XML, but there are some optimization tricks we have to know. The code here is safe to use in web applications, as it is not using any APIs that may have special needs regarding the server and infrastructure.
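
    As a minimal usage sketch (the file name and the companyId/year values are hypothetical; ImporterBase and ImportResult come from the importer framework described in my previous post):

        // Hypothetical caller, e.g. handling an uploaded file.
        using (var stream = File.OpenRead("report.ods"))
        {
            var importer = new OdsImporter();
            ImportResult result = importer.Import(stream, 1, 2012); // companyId, year
            // result carries any problems found, ready to show to the user
        }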

    Read the article

  • Looking into the jQuery LazyLoad Plugin

    - by nikolaosk
    I have been using jQuery for a couple of years now, and it has helped me to solve many problems on the client side of web development. You can find all my posts about jQuery in this link. In this post I will be providing you with a hands-on example of the jQuery LazyLoad plugin. If you want, you can have a look at this post, where I describe the jQuery Cycle plugin. You can find another post of mine talking about the jQuery Carousel Lite plugin here. Another post of mine regarding the jQuery Image Zoom plugin can be found here. You can have a look at the jQuery Overlays plugin here.

    There are times when I am asked to create a very long page with lots of images. My first thought is to enable paging on the proposed page. Imagine that we have 60 images on a page; there are performance concerns when we have so many images on a page. Paging can solve that problem if I am allowed to place only 5 images on a page. Sometimes the customer does not like the idea of paging. Believe it or not, some people find the idea of paging not attractive at all. In that case I need a way to load only the initial set of images and, as the user scrolls down the page, to load the rest. So as someone scrolls down, new requests are made to the server and more images are fetched. I can accomplish that with the jQuery LazyLoad plugin. This is just a plugin that delays the loading of images on long web pages: the images that are outside of the viewport (the visible part of the web page) won't be loaded before the user scrolls to them. Using the jQuery LazyLoad plugin on long web pages containing many large images makes the page load faster.

    In this hands-on example I will be using Expression Web 4.0. This application is not free. You can use any HTML editor you like, for example Visual Studio 2012 Express edition, which you can download here. You can download the plugin from this link.

    I launch Expression Web 4.0 and then I type the following HTML markup (I am using HTML 5):

        <!DOCTYPE html>
        <html lang="en">
          <head>
            <title>Liverpool Legends</title>
            <script type="text/javascript" src="jquery-1.8.3.min.js"></script>
            <script type="text/javascript" src="jquery.lazyload.min.js"></script>
          </head>
          <body>
            <header>
              <h1>Liverpool Legends</h1>
            </header>

            <div id="main">
              <img src="barnes.JPG" width="800" height="1100" /><p />
              <img src="dalglish.JPG" width="800" height="1100" /><p />

              <img class="LiverpoolImage" src="loader.gif" data-original="fans.JPG" width="1200" height="900" /><p />
              <img class="LiverpoolImage" src="loader.gif" data-original="lfc.JPG" width="1000" height="700" /><p />
              <img class="LiverpoolImage" src="loader.gif" data-original="Liverpool-players.JPG" width="1100" height="900" /><p />
              <img class="LiverpoolImage" src="loader.gif" data-original="steven_gerrard.JPG" width="1110" height="1000" /><p />
              <img class="LiverpoolImage" src="loader.gif" data-original="robbie.JPG" width="1200" height="1000" /><p />
            </div>

            <footer>
              <p>All Rights Reserved</p>
            </footer>

            <script type="text/javascript">
              $(function () {
                $("img.LiverpoolImage").lazyload();
              });
            </script>
          </body>
        </html>

    This is very simple markup. I have added references to the jQuery library (current version is 1.8.3) and the jQuery LazyLoad plugin.

    Firstly, I add two images that will load immediately as soon as the page loads:

        <img src="barnes.JPG" width="800" height="1100" /><p />
        <img src="dalglish.JPG" width="800" height="1100" /><p />

    Then I add the images that will not load unless they become visible in the viewport. I have all those img tags pointing the src attribute towards a placeholder image; I'm using a blank 1×1 px grey image, loader.gif. The five images that will load as the user scrolls down the page follow:

        <img class="LiverpoolImage" src="loader.gif" data-original="fans.JPG" width="1200" height="900" /><p />
        <img class="LiverpoolImage" src="loader.gif" data-original="lfc.JPG" width="1000" height="700" /><p />
        <img class="LiverpoolImage" src="loader.gif" data-original="Liverpool-players.JPG" width="1100" height="900" /><p />
        <img class="LiverpoolImage" src="loader.gif" data-original="steven_gerrard.JPG" width="1110" height="1000" /><p />
        <img class="LiverpoolImage" src="loader.gif" data-original="robbie.JPG" width="1200" height="1000" /><p />

    So the src attribute points towards the placeholder, while the full image URL goes into the data-original attribute. The JavaScript code that makes it all happen follows; we need to make a call to the jQuery LazyLoad plugin, adding the script just before we close the body element:

        <script type="text/javascript">
          $(function () {
            $("img.LiverpoolImage").lazyload();
          });
        </script>

    We can change the code above to incorporate some effects:

        <script type="text/javascript">
          $("img.LiverpoolImage").lazyload({
            effect: "fadeIn"
          });
        </script>

    That is all I need to write to achieve lazy loading. It is true that you can do so much with less!! I view my simple page in Internet Explorer 10 and it works as expected. I have tested this simple solution in all major browsers and it works fine. You can test it yourself and see the results in your favorite browser. Hope it helps!!!

    Read the article
