Search Results


  • How to create a "retro" pixel shader for transformed 2D sprites that maintains pixel fidelity?

    - by David Gouveia
    The image below shows two sprites rendered with point sampling on top of a background. The left skull has no rotation/scaling applied to it, so every pixel matches perfectly with the background. The right skull is rotated/scaled, and this results in larger pixels that are no longer axis-aligned. How could I develop a pixel shader that would render the transformed sprite on the right with axis-aligned pixels of the same size as the rest of the scene? This might be related to how sprite scaling was implemented in old games such as Monkey Island, because that's the effect I'm trying to achieve, but with rotation added.

    Edit: As per kaoD's suggestions, I tried to address the problem as a post-process. The easiest approach was to render to a separate render target first (downsampled to match the desired pixel size) and then upscale it when rendering a second time. It did address my requirements above. First I tried doing it Linear -> Point, and the result was this: there's no distortion, but the result looks blurred and loses most of the highlight colors. In my opinion it breaks the retro look I needed. The second time I tried Point -> Point, and the result was this: despite the distortion, I think that might be good enough for my needs, although it does look better as a still image than in motion. To demonstrate, here's a video of the effect, although YouTube filtered the pixels out of it: http://youtu.be/hqokk58KFmI

    However, I'll leave the question open for a few more days in case someone comes up with a better sampling solution that maintains the crisp look while decreasing the amount of distortion when moving.
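    For reference, a minimal XNA-style sketch of the Point -> Point post-process described above (the 4x pixel scale, the field names and the DrawScene helper are assumptions for illustration, not the asker's actual code):

    SpriteBatch spriteBatch;
    RenderTarget2D lowResTarget;

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        int scale = 4; // one "retro" pixel = 4x4 screen pixels (assumption)
        lowResTarget = new RenderTarget2D(
            GraphicsDevice,
            GraphicsDevice.Viewport.Width / scale,
            GraphicsDevice.Viewport.Height / scale);
    }

    protected override void Draw(GameTime gameTime)
    {
        // 1) Draw everything, rotated/scaled sprites included, into the
        //    downsampled render target using point sampling.
        GraphicsDevice.SetRenderTarget(lowResTarget);
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          SamplerState.PointClamp, null, null);
        DrawScene(spriteBatch); // hypothetical scene-drawing helper
        spriteBatch.End();

        // 2) Upscale to the backbuffer, again with point sampling, so every
        //    low-res pixel becomes an axis-aligned block of screen pixels.
        GraphicsDevice.SetRenderTarget(null);
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                          SamplerState.PointClamp, null, null);
        spriteBatch.Draw(lowResTarget, GraphicsDevice.Viewport.Bounds, Color.White);
        spriteBatch.End();
        base.Draw(gameTime);
    }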

    Read the article

  • Autocomplete in Silverlight with Visual Studio 2010

    - by Sayre Collado
    Last week I kept searching for how to use autocomplete in Silverlight with Visual Studio 2010, but most of the examples I found use a TextBox or ComboBox for the autocomplete. I tried to study those examples and apply them to the AutoCompleteBox control from the toolbox in my Silverlight project, and this is the result. I will again use the database from my previous post (Silverlight Simple DataBinding in DataGrid) to show how the autocomplete works with a database. This is the output:

    First, this is the setup for my autocomplete:

    //The tags for the AutoCompleteBox in XAML (see the sketch at the end of this post)

    Second, my simple snippets:

    //Event for the autocomplete to send a text string to my function
    private void autoCompleteBox1_KeyUp(object sender, KeyEventArgs e)
    {
        autoCompleteBox1.Populating += (s, args) =>
        {
            args.Cancel = true;
            var c = new Service1Client();
            c.GetListByNameCompleted += new EventHandler<GetListByNameCompletedEventArgs>(c_GetListByNameCompleted);
            c.GetListByNameAsync(autoCompleteBox1.Text);
        };
    }

    //Getting result from database
    void c_GetListByNameCompleted(object sender, GetListByNameCompletedEventArgs e)
    {
        autoCompleteBox1.ItemsSource = e.Result;
        autoCompleteBox1.PopulateComplete();
    }

    The snippets above show how to use the AutoCompleteBox with the data from the database that is bound to the DataGrid. But what if we want to show the result in the DataGrid while the autocomplete changes the items source? Just add one line to c_GetListByNameCompleted:

    void c_GetListByNameCompleted(object sender, GetListByNameCompletedEventArgs e)
    {
        autoCompleteBox1.ItemsSource = e.Result;
        autoCompleteBox1.PopulateComplete();
        dataGrid1.ItemsSource = e.Result;
    }
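    A minimal XAML declaration consistent with the code-behind might look like this (the sdk namespace prefix and the extra attributes are assumptions; only the control name and the KeyUp handler come from the snippets above):

    <sdk:AutoCompleteBox x:Name="autoCompleteBox1"
                         MinimumPrefixLength="1"
                         KeyUp="autoCompleteBox1_KeyUp"
                         Width="200" />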

    Read the article

  • mediatomb fails with "respawning too fast, stopped"

    - by felix
    When I try to start mediatomb it fails. I see this in dmesg:

    [...]
    [916349.374331] init: mediatomb main process ended, respawning
    [916349.394462] init: mediatomb main process (880) terminated with status 1
    [916349.394512] init: mediatomb main process ended, respawning
    [916349.414598] init: mediatomb main process (882) terminated with status 1
    [916349.414647] init: mediatomb respawning too fast, stopped

    My current /etc/init/mediatomb.conf looks like this:

    description "MediaTomb UPnP media server"
    author "Daniel van Vugt <vanvugt in launchpad>"

    start on (local-filesystems and net-device-up IFACE!=lo)
    stop on runlevel [!2345]
    respawn

    env CONFIGXML=/etc/mediatomb/config.xml
    env LOGFILE=/var/log/mediatomb.log
    env DEFAULT=/etc/default/mediatomb

    script
        [ -r $DEFAULT ] && . $DEFAULT
        [ ! $USER ] && USER=root
        [ ! $GROUP ] && GROUP=$USER
        if [ -n "$INTERFACE" ]; then
            INTERFACE_ARG="-e $INTERFACE"
            $ROUTE_ADD $INTERFACE
        fi
        exec mediatomb \
            -c $CONFIGXML \
            -u $USER \
            -g $GROUP \
            -l $LOGFILE \
            $INTERFACE_ARG \
            $OPTIONS
    end script

    post-stop script
        [ -r $DEFAULT ] && . $DEFAULT
        if [ -n "$INTERFACE" ]; then
            $ROUTE_DEL $INTERFACE
        fi
    end script

    Read the article

  • Good Scoop: The PeopleSoft/IBM Backstory

    - by [email protected]
    By Brian Dayton on April 12, 2010 11:15 AM

    Sometimes you're searching for something online and you find an unrelated, bonus nugget. Last week I stumbled across an interesting blog post from Chris Heller of Grey Sparling, a PeopleSoft consulting shop in San Ramon, CA. I don't know these guys. But Chris, who apparently used to work on the PeopleTools team, wrote a great article on a pre-acquisition, would-be deal between IBM and PeopleSoft that would have standardized PeopleSoft on IBM technology. The behind-the-scenes perspective is interesting. His commentary on the challenges that the company and PeopleSoft customers would have encountered if the deal had gone through was also interesting:

    · "No common ownership. It's hard enough to get large groups of people to work together when they work for the same company, but with two separate companies it is much, much harder. Even within Oracle, progress on Fusion applications was slow until Thomas Kurian took over Fusion applications in addition to Fusion middleware."

    · "No customer buy-in. PeopleSoft customers weren't asking for a conversion to WebSphere, so the fact that doing that could have helped PeopleSoft stay independent wouldn't have meant much to them, especially since the cost of moving to a "PeopleSoft built on WebSphere" would have been significant."

    · "No executive buy-in. This is related to the previous point, but it's worth calling out separately. If Oracle had walked away and the deal with IBM had gone through, and PeopleSoft customers got put through the wringer as part of the WebSphere move, all of the PeopleSoft project teams would be put in the awkward position of explaining to their management why these additional costs and headaches were happening. Essentially they would need to "sell" the partnership internally to their own management team. That's not a fun conversation to have."

    I'm not surprised that something like this was in the works. But I did find the inside scoop and Heller's perspective on the challenges particularly interesting, especially the advantages of aligning applications and infrastructure development under one roof. Here's a link to the whole blog entry.

    Read the article

  • basic beginning emacs questions - install latest version and pick appropriate UI

    - by MountainX
    I'm running the latest Kubuntu (12.04 beta 2) and I would like to run the latest Emacs (currently v24). The repos are one version behind. What's the best way to install v24 or later (and avoid future version conflicts)?

    Also, is there any reason not to always use the GUI version of Emacs if X is running? For example, could I set the GUI Emacs version as the default text editor and use it to edit cron jobs (crontab -e)? I'm assuming the answer is yes, but since I haven't done that yet (my default editor is nano), I want to check if there are reasons I should leave nano as the default editor. Usually when I'm working on the command line I end up using nano. Now that I think about it, I have no idea why I keep doing that. Is there any downside to calling a GUI editor when working in an X terminal?

    EDIT: I briefly tested these two versions: GNU Emacs 24.0.94.1 (x86_64-pc-linux-gnu, GTK+ Version 3.3.20) and GNU Emacs 23.3.1 (x86_64-pc-linux-gnu), the latter installed by default in Kubuntu. This post explains some of the differences between versions. Unfortunately (for me) the default installed version (23.3.1, 23.3+1-1ubuntu9) is the nox version:

    Package: emacs23-nox
    Status: install ok installed
    Version: 23.3+1-1ubuntu9
    Replaces: emacs23, emacs23-gtk, emacs23-lucid

    The package with version 24 opens in GUI mode by default. That's what I prefer. Some of the version 24 changes that interest me are listed in the references below. But there appear to be a multitude of different packages and versions I could install.

    References:

    What's New In Emacs 24 (part 1) | Mastering Emacs
    http://www.masteringemacs.org/articles/2011/12/06/what-is-new-in-emacs-24-part-1/
    "shell-mode uses pcomplete rules, with the standard completion UI. Yowzah! There's a lot of cool, new functionality hidden away in this gem of a change."

    EmacsWiki: Recent Changes
    http://www.emacswiki.org/emacs/?action=rc;showedit=0

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We'll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them, and how to manage their lifetime.

    Installing the driver

    Info on how to install the driver can be found in an earlier blog post here.

    Establishing connections

    As you click on the "Add Connection" link in the left pane you will notice that now it's possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it's possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, hence you can create a new application if you don't want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before it gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there's any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached, etc.).

    The context for remote servers

    Let's take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you're writing. If you're connecting to a live server the context will contain the application object itself and all entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later.

    Remoting is not supported

    As you play with the entities exposed by the context you will notice that you can't read and write directly to/from them. If for instance you're trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process.

    ObservableSource.Dump();

    will yield the following error:

    Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.
    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can't bring the results to the LinqPad window unless you write code specifically for that.

    Compose queries

    You may ask – what's the purpose of all that? After all, the same information is present in the EventFlowDebugger, so why bother showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let's say that this code creates an entity:

    using (var server = Server.Connect(...))
    {
        var a = server.CreateApplication("WhiteFish");
        var src = a
            .DefineObservable<int>(() => Observable.Range(0, 3))
            .Deploy("ObservableSource");

    If later we want to compose with the source we have to fetch it, and then we can bind something to

    a.GetObservable<int>("ObservableSource").Bind(...

    This means that we had to know a bunch of things about this: that it's a source, that it's an observable, that it produces a result with payload Int32, and that it's named "ObservableSource". Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them and it's much easier to make sense of what's available.

    Let's look at a scenario where composition is plausible. With the new programming model it's possible to create "cold" sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it's much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int. In the particular case of my example I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process.
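    Putting those pieces together, here is a minimal sketch of fetching the deployed source and binding it to a sink on the remote server (the sink name, process name and endpoint address are assumptions; only GetObservable and Bind come from the text above):

    // Assumes the StreamInsight 2.1 client assemblies are referenced.
    using (var server = Server.Connect(new System.ServiceModel.EndpointAddress("http://localhost/StreamInsight/Default")))
    {
        var a = server.Applications["WhiteFish"];

        // Fetch the source deployed earlier and a hypothetical observer
        // deployed on the same server, then bind them and run the process.
        var source = a.GetObservable<int>("ObservableSource");
        var sink = a.GetObserver<int>("ObserverSink"); // assumed to exist on the server

        var process = source.Bind(sink).Run("WhiteFishProcess");
    }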
    Proxy Types

    Here's an interesting scenario – what if someone created a source on a server but they forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can't figure out how to use that type. Let's walk through an example that shows how you can compose against types you don't have the definition of. This is how we can create a source that returns an anonymous type:

    Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

    Now if we refresh the connection we can see the new source named O1 appear in the list. But what's more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type:

    var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
    var filter = from i in O1
                 where i > threshold
                 select i;
    filter.Deploy("O2");

    You will notice that the anonymous type defined with this statement:

    new { I = i }

    can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If it's not possible, then a new type will be generated that approximates the type that exists on the server.

    Control metadata

    In addition to composing processes on top of the existing entities, we can do other useful things. We can delete them – nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There's another way to find out what's behind a property – dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.

    Runtime information

    So let's create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right-click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view, etc.).

    Regards,
    The StreamInsight Team

    Read the article

  • SQL SERVER – A Cool Trick – Restoring the Default SQL Server Management Studio – SSMS

    - by pinaldave
    “I do not know where my windows went!” “I just closed my Object Explorer and now I cannot find it.” “How do I get my original window layout back in SQL Server Management Studio?” “How do I get the window which was on the left side back again?”

    For the last 2-3 years, every single day I have received more than 5 emails about SSMS and its layout. For beginners it is very common to get confused when they attempt to change SQL Server Management Studio's window layout. They often change the layout and are not able to get the original layout back. Often people do not change the layout their whole life, which leads to an uncomfortable feeling when they go to another person's computer where the windows are placed differently.

    Today's blog post is dedicated to all the beginners in SQL Server. It is extremely simple to reset the SSMS layout to the default layout. The default layout involves 2 major things: 1) Object Explorer on the left side, and 2) Query Windows on the right side (80% of the screen estate). Personally I am so used to this that if there is any change to it, I do not enjoy working in that environment.

    Well, the solution to reset the SSMS layout is very simple. One can do it in a split second. To restore the default configuration, on the Window menu, click Reset Window Layout.

    Have you ever used this feature? Do you feel uncomfortable when the SSMS layout is not in its default state? How do you address this situation?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Bay Area JUG Roundup 2010 - A Good Time for All

    - by Justin Kestelyn
    The first Bay Area JUG Roundup (#roundup10) convened at Oracle HQ on Wednesday evening, in the palatial surroundings of the Oracle Conference Center. (Yes, there will be more!) A couple hundred people were there, I'd say. More came out of this meetup than a bunch of new contacts and some mild indigestion (or even a mild hangover):

    - We (meaning, Oracle) announced the opening of the eighth annual Duke's Choice Awards. As described on the Web page, "The awards celebrate extreme innovation in the world of Java technology and are granted to the best and most innovative projects using the Java platform." Entries will be accepted through July 1, with winners announced at JavaOne 2010.

    - Even more exciting, we offered a sneak preview of the Java Road Trip, a cross-country, 20-stop bus tour this summer involving one rock-star bus, one full-time blogger/videographer, a whole bunch of Java demos and speakers, and lots of beer and prizes. Stay tuned for more info about this.

    - Sonya Barry, Java.net community manager, announced the beta.java.net project, which will be the end result of the java.net migration to a Kenai back-end and a retooled social/community layer (already in progress). Sonya also announced that Maven support for Java.net projects is imminent, with just a contract to be signed in the next couple of weeks.

    Finally, we were all treated to a typically hilarious Java Posse appearance. Arun Gupta has posted photos as well as meetup slideware at his blog. And as soon as the video replay (thanks, Steve Chin) and Java Posse podcasts are available, I'll post links to those here too.

    Read the article

  • SEO Keyword Research Help

    - by James
    Hi Everyone, I'm new at SEO and keyword research. I am using Market Samurai as my research tool, and I was wondering if I could ask for your help to identify the best keyword to target for my niche. I do plan on incorporating all of them into my site, but I wanted to start with one. If you could give me your input on these keywords, I would appreciate it. This is all new to me :) I'm too new to post pictures, but here are my keywords (Searches, SEO Traffic, and SEO Value are per day):

       | Searches | SEO Traffic | PBR | SEO Value | Average PR/Backlinks of Current Top 10
    1: | 730      | 307         | 20% | 2311.33   | 1.9 / 7k-60k
    2: | 325      | 137         | 24% | 822.94    | 2.3 / 7k-60k
    3: | 398      | 167         | 82% | 589.79    | 1.6 / 7k-60k

    I'm wondering if the PBR (Phrase-to-Broad Ratio) value of #1 is too low. It seems like the best value because the SEOV is crazy high; that is like $70k a month. #3 has the highest PBR, but also the lowest SEOV. #2 doesn't seem worth it because of the PR competition; it might be a little too hard to get onto the top page of Google. I'm wondering which keywords to target, and if I should be looking at any other metric to see if this is a profitable niche to jump into. Thanks.

    Read the article

  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
    Background

    One of the clients I work with had been experiencing some issues for a while surrounding web service timeouts. It's been a little challenging to work through the problems due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting the problem which don't seem to have been discussed much in the community, so I thought I'd share my findings.

    In the scenario we have BizTalk trying to make calls to a .NET web service which was exposed as a WSE 2 endpoint. In the process BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <connectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application.

    The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service. One of the biggest issues was the challenge of being able to correlate a message from BizTalk to the IIS log in the .NET application and the custom logs in the application, especially when there was a fairly large number of servers hosting the web services. However, the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes, a long time I know, but that's a different story!). Anyway, we were able to identify that this call had timed out on the BizTalk side. Based on the normal 2 minute timeout we knew something unexpected was going on. From here I decided to do some experimentation, and I wanted to start outside of BizTalk because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.

    Server-side - Sample Web Service

    To begin with I created a sample web service. Nothing special, just a vanilla asmx web service hosted in IIS6 on Windows 2003 Standard Edition. The web service is just a hello world style web service, as shown in the below picture. The only key feature is that the server side web method has a 30 second sleep in it and will trace out some information before and after the thread is set to sleep. In the configuration for this web service there again is nothing special; it's pretty much the most plain, simple web service you could build.

    Client-Side

    To begin looking at what was happening with our example, I created a number of different ways to consume the web service.

    SoapHttpClientProtocol Example

    I created a small application which would use a normal generated proxy to call the web service. It would iterate around a loop and make calls using the begin/end methods so I can do this asynchronously. I would do a loop of 20 calls with the connectionManagement configuration section supporting only 5 concurrent connections to the server:

    <system.net>
        <connectionManagement>
            <remove address="*"/>
            <add address="*" maxconnection="12" />
            <add address="http://<ServerName>" maxconnection="5" />
        </connectionManagement>
    </system.net>

    The below picture shows an example of the service calling code; the key points are:

    - I have configured a timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    I would run the client and execute 21 calls to the web service.
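    For illustration, a minimal sketch of what such a client loop might look like (the HelloWorldService proxy type and its Begin/EndHelloWorld methods are assumptions based on the sample service described above, not the original code):

    // Assumes: using System; using System.Diagnostics; using System.Net;
    // HelloWorldService is a hypothetical proxy generated from the sample service.
    var proxy = new HelloWorldService();
    proxy.Timeout = 40000; // 40 second timeout, in milliseconds

    for (int i = 0; i < 21; i++)
    {
        int call = i;
        Trace.WriteLine("Starting call " + call);
        proxy.BeginHelloWorld(ar =>
        {
            try
            {
                string result = proxy.EndHelloWorld(ar);
                Trace.WriteLine("Call " + call + " completed: " + result);
            }
            catch (WebException ex)
            {
                // A timeout surfaces here as a WebException on the End call.
                Trace.WriteLine("Call " + call + " failed: " + ex.Message);
            }
        }, null);
    }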
    The Results

    Below is the client side trace showing what's happening on the client. In the below diagram is the web service side trace showing what's happening on the server. Some observations on the results are:

    - All of the calls were successful from the client's perspective
    - You could see the next call starting on the server as soon as the previous one had completed
    - Calls took significantly longer than 40 seconds from the start of our call to the return. In fact, call 20 took 2 minutes and 30 seconds from the perspective of my code to execute, even though I had set the timeout to 40 seconds

    WSE 2 Sample

    In the second example I used the exact same code to call the web service again, with the single exception that I modified the web service proxy to derive from WebServicesClientProtocol, which is part of WSE 2 (using SP3). The below picture shows the basic code, and the key points are:

    - I have configured a timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    This test would execute 21 calls from the client to the web service.

    The Results

    The below trace is from the client side; the one after it is from the server side. Some observations on the trace results for this scenario are:

    - With call 4, if you look at the server side trace, it did not start executing on the server until a number of seconds after the other 4 initial calls which were accepted by the server. I re-ran the test and this happened a couple of times and not on most others, so at this point I'm just putting this down to something unexpected happening on the development machine and we will leave this observation out of scope of this article.
    - You can see that the client side trace statement executed almost immediately in all cases
    - All calls after the initial few calls would time out
    - On the client side, the calls that did time out timed out after a longer duration than the 40 seconds we set as the timeout
    - You can see that as calls were completing on the server, the next calls were starting to come through
    - The calls that timed out on the client did actually connect to the server, and their server side execution completed successfully

    Elaboration on the findings

    Based on the above observations I have drawn the below sequence diagram to illustrate conceptually what is happening. Everything except the final web service object is on the client side of the call. In the diagram below I've put two notes on the Web Service Proxy to show the two different places where the different base classes seem to start their timeout counters. From the earlier samples we can work out that the timeout counter for the WSE web service proxy starts before the one for the SoapHttpClientProtocol proxy, and the WSE one includes the time to get a connection from the pool, whereas the Soap proxy timeout just covers the method execution.

    One interesting observation is that if we rerun the above sample and increase the number of calls from 21 to 100,000, then for the WSE sample we will see a similar pattern where everything after the first few calls will time out on the client as soon as it makes a connection to the server, whereas the Soap proxy will happily plug away and process all of the calls without a single timeout. I actually set the sample running overnight and this did happen. At this point you are probably thinking the same thoughts I was at the time about the differences in behaviour, and which is right, and why are they different?
    I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to consider that they are different and they could have different effects depending on your messaging solution. In lots of situations this is just not an issue, as your concurrent requests don't get to the point where you end up throttling the web service calls on the client side; however, this is definitely more common with an integration broker such as BizTalk where you often have high throughput requirements.

    Some of the considerations you should make

    Based on this behaviour you should be aware of the following:

    - In a .NET application, if you are making lots of concurrent web service calls in an asynchronous manner, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client has a default of 2 connections to remote servers, so you should bear this in mind
    - When you are developing a BizTalk solution or a .NET solution with the WSE 2 stack, you may experience timeouts under load, and throttling the number of connections using the max connections element in the configuration file will not help you
    - For an application using WSE 2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns

    Our Work Around

    In the short term, for our specific scenario, we know that we can handle this by just increasing our timeout value. There is only a specific small window when we get lots of concurrent traffic that causes this scenario, so we should be able to increase the timeout to take into consideration the additional client side wait, and on the odd occasion when we do get a timeout, the BizTalk send port retry will handle it. What was causing our original problem was that for that short window we were getting a lot of retries, which significantly increased the load on our send servers and highlighted the issue.

    Longer Term Solution

    As a longer term solution, this really gives us more ammunition to argue for a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options, because in WCF you can configure specific parts of the timeout.

    Summary

    I've had this blog post on my to-do list for ages, but hopefully it will be useful to some people, just to understand this behaviour and to possibly help you with some performance issues you may have. I do not believe there is too much in the way of documentation, particularly around WSE 2 and ASMX, in this area, so again, another bit of ammunition for migrating to WCF. I'll try to do a follow-up post with the sample for WCF to show how this changes things.
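    Until that follow-up post, here is a minimal sketch of the finer-grained timeout knobs WCF exposes (the binding type and the values are illustrative only, not a recommendation for this scenario):

    // WCF splits the client-side timeout into separate phases (values illustrative).
    // Assumes: using System; using System.ServiceModel;
    var binding = new BasicHttpBinding
    {
        OpenTimeout = TimeSpan.FromSeconds(10),    // establishing the connection
        SendTimeout = TimeSpan.FromSeconds(40),    // completing an outgoing call
        ReceiveTimeout = TimeSpan.FromMinutes(10), // idle time on the channel
        CloseTimeout = TimeSpan.FromSeconds(10)    // shutting the channel down
    };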

    Read the article

  • SSH onto Ubuntu box using RSA keys

    - by jex
    I recently installed OpenSSH on one of my Ubuntu machines and I've been running into problems getting it to use RSA keys. I've generated the RSA key on the client (ssh-keygen), and appended the generated public key to both the /home/jex/.ssh/authorized_keys and /etc/ssh/authorized_keys files on the server. However, when I try to login (ssh -o PreferredAuthentications=publickey jex@host -v, which forces the use of the public key for login) I get the following output:

    debug1: Host 'pentheon.local' is known and matches the RSA host key.
    debug1: Found key in /home/jex/.ssh/known_hosts:2
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    Banner message
    debug1: Authentications that can continue: publickey,keyboard-interactive
    debug1: Next authentication method: publickey
    debug1: Offering public key: /home/jex/.ssh/id_rsa
    debug1: Authentications that can continue: publickey,keyboard-interactive
    debug1: Trying private key: /home/jex/.ssh/identity
    debug1: Trying private key: /home/jex/.ssh/id_dsa
    debug1: No more authentication methods to try.
    Permission denied (publickey,keyboard-interactive).

    I'm not entirely sure where I've gone wrong. I am willing to post my /etc/ssh/sshd_config if needed.

    Read the article

  • Are there any statistics on the number of networks using Ethernet Jumbo frames?

    - by Russell Heilling
    I am often told by Sales Engineers and Product Managers that a Layer 2 Ethernet service is not fit for purpose if the maximum supported frame size is in the 1518-1522 range (enough to support standard Ethernet frames or VLAN tagged frames). Or in other words: an MTU of 1500 is not enough (see this blog post for my definition of MTU and a short rant on how the term is often misused). I am never able to get any details from them on:

    a) What proportion of customers (Enterprise / SMB) require Jumbo frames
    b) What are typical expectations on frame size for Jumbos on WAN links? 1600, 2000, 9k, etc...

    I know that in Telco and Data Centre environments Jumbos are pretty common, but I am after some insight as to how common this is within Enterprise and SMB networks.

    Read the article

  • PowerShell One Liner: Duplicating a folder structure in a Sharepoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day whether it was possible in SharePoint to create a set of top level folders in one document library based on the set of folders in another library. One document library has a set of top level folders that is basically a client list, and we needed to create the same top level folders in another library. I knew that it was possible to open a SharePoint document library in Explorer using a UNC-style path and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while Explorer would let us copy the folders, it would also take all of the folder contents too, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go, and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :)

    dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"}

    I use "dir" to get a listing from the source folder, pipe it through "where" to get only objects that are folders, and then do a foreach (using the % alias) and call "mkdir".

    Read the article

  • Internet Explorer 9 Preview 2 link + webcasts for developers

    - by Eric Nelson
    At Web Directions last week in London (10th and 11th June 2010) I promised several folks I would put up a blog post with more information on IE9. True to my word (albeit a little later than I had hoped), here is what I was thinking of:

    Install

    First up, install Preview 2 and try out the demos I was showing at the conference. Remember that the IE9 Preview installs side by side with IE8/7 etc. It is not a beta, nor is it intended to be a full browser. It is a ... preview :-) Including good old SVG-oids :-)

    Learn

    And then check out the following webcasts, which were recorded in March this year at MIX:

    In-Depth Look At Internet Explorer 9
    Presenters: Ted Johnson & John Hrvatin
    VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/CL28

    High Performance Best Practices For Web Sites
    Presenter: Jason Weber
    VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/CL29

    HTML5: Cross Browser Best Practices
    Presenter: Tony Ross
    VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/CL27

    Internet Explorer Developer Tools
    Presenter: Jon Seitel
    VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/FT51

    SVG: The Past, Present And Future of Vector Graphics For The Web
    Presenters: Patrick Dengler, Doug Schepers
    VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/EX30

    Day 2 Keynote containing IE9
    Presenter: Dean Hachamovitch
    VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/KEY02

    For each session, slides are available for download and videos are available in MP4, small WMV and large WMV formats.

    Read the article

  • CPU I/O communication part 2

    - by b-gen-jack-o-neill
    Hi, I was advised, when I have further questions on my older ones, to create a newer question and refer to the old one. So, this is the original question: CPU I/O communication. OK, I have just 2 more questions.

    1. How is MMIO set? I mean, let's say I have developed some extension card with a PCI interface. How do you tell the memory controller to redirect the desired memory space to my card's RAM? I mean, I think the standard text-mode VGA MMIO address is 0xB8000. Is this set in hardware (every motherboard automatically redirects the 0xB8000 address to PCI-E), or is it just a standard that can be changed (the VGA card says to the motherboard: give me memory-mapped I/O at 0xB8000)?

    2. Can one expansion card have more MMIO connections? I mean, let's say I have a card with 10 bytes of RAM. Can bytes 0-4 be mapped to 0x0 and bytes 5-9 to 0x5000?

    Please, if you know about some good article on this, please post a link. Thanks.

    Read the article

  • Retrieve the full ASP.NET Form Buffer as a String

    - by Rick Strahl
    Did it again today: for logging purposes I needed to capture the full Request.Form data as a string, and while it's pretty easy to retrieve the buffer, it always takes me a few minutes to remember how to do it. So I finally wrote a small helper function to accomplish this, since it comes up rather frequently, especially in debugging scenarios or in the immediate window. Here's the quick function to get the form buffer as a string:

    /// <summary>
    /// Returns the content of the POST buffer as string
    /// </summary>
    /// <returns></returns>
    public static string FormBufferToString()
    {
        HttpRequest Request = HttpContext.Current.Request;
        if (Request.TotalBytes > 0)
            return Encoding.Default.GetString(Request.BinaryRead(Request.TotalBytes));
        return string.Empty;
    }

    Clearly a simple task, but handy to have in your library for reuse. You probably don't want to call this if you have a massive inbound form buffer, or if the data you're retrieving is binary. It's probably a good idea to check the inbound content type before calling this function with something like this:

    var formBuffer = string.Empty;
    if (Request.ContentType.StartsWith("text/") ||
        Request.ContentType == "application/x-www-form-urlencoded")
    {
        formBuffer = FormBufferToString();
    }

    to ensure you're working only on content types you can actually view as text. Now if I can only remember the name of this function in my library - it's part of the static WebUtils class in the West Wind Web Toolkit if you want to check out a number of other useful Web helper functions.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET

    Read the article

  • Architecture behind live streaming [on hold]

    - by l19
    I'm a Comp Sci undergraduate student, and I'm currently trying to understand the architecture behind streaming. I hear several terms and I'm not quite sure how they are related (e.g. streaming, broadcasting, ingesting, etc.). Is there a blog post or book that explains:

    1. How it all works in a high-level view (the workflow)
    2. The architecture (i.e. I capture content using my camera and want to display it in real time to an audience. I imagine that the content will be transferred to a server, but how does that server transmit the information to several users simultaneously?)

    Thanks!

    Read the article

  • Silverlight 4 Twitter Client – Part 6

    - by Max
    In this post, we are going to look at implementing lists in our Twitter application and also at enhancing the DataGrid to display the status messages in a pleasing way with the profile images. Twitter lists are a really cool feature that they recently added. I love them, and I've set up quite a few lists: one for DOTNET gurus, one for SQL Server gurus and one for a few celebrities. You can follow them here. Now let us move on to our tutorial.

    1) Lists can be subscribed to in two ways: one can be the user's own lists, which he has created, and another is the lists that the user is following. For example, I've created 3 lists myself and I am following 1 other list created by another user. Both of them cannot be fetched in the same API call; it's a two step process.

    2) In the TwitterCredentialsSubmit method we have in Home.xaml.cs, let us do the first API call to get the lists that the user has created. For this the call has to be made to https://twitter.com/<TwitterUsername>/lists.xml. The API reference is available here.

    myService1.AllowReadStreamBuffering = true;
    myService1.UseDefaultCredentials = false;
    myService1.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
    myService1.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsRequestCompleted);
    myService1.DownloadStringAsync(new Uri("https://twitter.com/" + GlobalVariable.getUserName() + "/lists.xml"));

    3) Now let us look at implementing the event handler - ListsRequestCompleted - for this.

    public void ListsRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e)
    {
        if (e.Error != null)
        {
            StatusMessage.Text = "This application must be installed first.";
            parseXML("");
        }
        else
        {
            //MessageBox.Show(e.Result.ToString());
            parseXMLLists(e.Result.ToString());
        }
    }

    4) Now let us look at parseXMLLists in detail:

    xdoc = XDocument.Parse(text);
    var answer = (from status in xdoc.Descendants("list")
                  select status.Element("name").Value);
    foreach (var p in answer)
    {
        Border bord = new Border();
        bord.CornerRadius = new CornerRadius(10, 10, 10, 10);
        Button b = new Button();
        b.MinWidth = 70;
        b.Background = new SolidColorBrush(Colors.Black);
        b.Foreground = new SolidColorBrush(Colors.Black);
        //b.Width = 70;
        b.Height = 25;
        b.Click += new RoutedEventHandler(b_Click);
        b.Content = p.ToString();
        bord.Child = b;
        TwitterListStack.Children.Add(bord);
    }

    So what am I doing here? I am just dynamically creating a button for each of the lists, putting them within a StackPanel, and for each of these buttons I am creating an event handler, b_Click, which will be fired on button click. We will look into this method in detail soon. For now let us get the buttons displayed.

    5) Now the user might have some lists to which he has subscribed. We need to create a button for each of these as well. At the end of the TwitterCredentialsSubmit method, we need to make a call to http://api.twitter.com/1/<TwitterUsername>/lists/subscriptions.xml. The reference is available here. The code will look like this:
    myService2.AllowReadStreamBuffering = true;
    myService2.UseDefaultCredentials = false;
    myService2.Credentials = new NetworkCredential(GlobalVariable.getUserName(), GlobalVariable.getPassword());
    myService2.DownloadStringCompleted += new DownloadStringCompletedEventHandler(ListsSubsRequestCompleted);
    myService2.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/subscriptions.xml"));

    6) In the event handler - ListsSubsRequestCompleted - we need to parse through the XML string and create a button for each of the subscribed lists. Let us see how. I've taken only the "full_name"; you can choose what you want, refer to the documentation here. Note that the full_name will have the @<UserName>/<ListName> format - this will be useful very soon.

    xdoc = XDocument.Parse(text);
    var answer = (from status in xdoc.Descendants("list")
                  select status.Element("full_name").Value);
    foreach (var p in answer)
    {
        Border bord = new Border();
        bord.CornerRadius = new CornerRadius(10, 10, 10, 10);
        Button b = new Button();
        b.Background = new SolidColorBrush(Colors.Black);
        b.Foreground = new SolidColorBrush(Colors.Black);
        //b.Width = 70;
        b.MinWidth = 70;
        b.Height = 25;
        b.Click += new RoutedEventHandler(b_Click);
        b.Content = p.ToString();
        bord.Child = b;
        TwitterListStack.Children.Add(bord);
    }

    Please note, I am setting the button width to be auto based on the content and also giving it a min-width value. I wanted to create rounded-corner buttons, but for some reason it's not working. Also add this StackPanel - TwitterListStack - to Home.xaml:

    <StackPanel HorizontalAlignment="Center" Orientation="Horizontal" Name="TwitterListStack"></StackPanel>

    After doing this, you will get a series of buttons at the top of the home page.

    7) Now the button click event handler - b_Click. In this method, once the button is clicked, I call another method with the content string of the clicked button as the parameter.

    Button b = (Button)e.OriginalSource;
    getListStatuses(b.Content.ToString());

    8) Now the getListStatuses method:

    toggleProgressBar(true);
    WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp);
    WebClient myService = new WebClient();
    myService.AllowReadStreamBuffering = true;
    myService.UseDefaultCredentials = false;
    myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);
    if (listName.IndexOf("@") > -1 && listName.IndexOf("/") > -1)
    {
        string[] arrays = null;
        arrays = listName.Split('/');
        arrays[0] = arrays[0].Replace("@", " ").Trim();
        //MessageBox.Show(arrays[0]);
        //MessageBox.Show(arrays[1]);
        string url = "http://api.twitter.com/1/" + arrays[0] + "/lists/" + arrays[1] + "/statuses.xml";
        //MessageBox.Show(url);
        myService.DownloadStringAsync(new Uri(url));
    }
    else
        myService.DownloadStringAsync(new Uri("http://api.twitter.com/1/" + GlobalVariable.getUserName() + "/lists/" + listName + "/statuses.xml"));

    Please note that the URL will be different based on the list clicked. If it is a list the user created, the URL format will be:

    http://api.twitter.com/1/<CurrentUser>/lists/<ListName>/statuses.xml

    But if it is a subscribed list, it will be:

    http://api.twitter.com/1/<ListOwnerUserName>/lists/<ListName>/statuses.xml

    The first one is pretty straightforward to implement, but if it's a subscribed list, we need to split the listName string to get the list owner and list name and use them to form the URL.
    So that is what I've done in this method: if the listName contains "@" and "/", I build the URL differently.

    9) Until now, we've been using only a few nodes of the status message XML string; now we will fetch a new field - "profile_image_url". Images in the DataGrid - COOL. For that, we need to modify our Status.cs file to include two more properties, one string and one BitmapImage, with get and set:

    public string profile_image_url { get; set; }
    public BitmapImage profileImage { get; set; }

    10) Now let us change the generic parseXML method which is used for binding to the DataGrid:

    public void parseXML(string text)
    {
        XDocument xdoc;
        xdoc = XDocument.Parse(text);
        statusList = new List<Status>();
        statusList = (from status in xdoc.Descendants("status")
                      select new Status
                      {
                          ID = status.Element("id").Value,
                          Text = status.Element("text").Value,
                          Source = status.Element("source").Value,
                          UserID = status.Element("user").Element("id").Value,
                          UserName = status.Element("user").Element("screen_name").Value,
                          profile_image_url = status.Element("user").Element("profile_image_url").Value,
                          profileImage = new BitmapImage(new Uri(status.Element("user").Element("profile_image_url").Value))
                      }).ToList();
        DataGridStatus.ItemsSource = statusList;
        StatusMessage.Text = "Datagrid refreshed.";
        toggleProgressBar(false);
    }

    Here we are creating a new bitmap image from the image URL, creating a new Status object for every status, and binding them to the DataGrid. Refer to the Twitter API documentation here. You can choose any column you want.

    11) Until now, we've been using auto-generated columns for the DataGrid, but if you want it to be really cool, you need to define the columns with templates, etc.:

    <data:DataGrid AutoGenerateColumns="False" Name="DataGridStatus" Height="Auto" MinWidth="400">
        <data:DataGrid.Columns>
            <data:DataGridTemplateColumn Width="50" Header="">
                <data:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <Image Source="{Binding profileImage}" Width="50" Height="50" Margin="1"/>
                    </DataTemplate>
                </data:DataGridTemplateColumn.CellTemplate>
            </data:DataGridTemplateColumn>
            <data:DataGridTextColumn Width="Auto" Header="User Name" Binding="{Binding UserName}" />
            <data:DataGridTemplateColumn MinWidth="300" Width="Auto" Header="Status">
                <data:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <TextBlock TextWrapping="Wrap" Text="{Binding Text}"/>
                    </DataTemplate>
                </data:DataGridTemplateColumn.CellTemplate>
            </data:DataGridTemplateColumn>
        </data:DataGrid.Columns>
    </data:DataGrid>

    I've used only three columns: profile image, username and status text. Now our DataGrid will look super cool, like this. Coincidentally, Tim Heuer is in the screenshot; he is a Silverlight guru and works on the Silverlight team at Microsoft. His blog is really super. Here is the zipped file for all the xaml, xaml.cs & class files.

    OK, let us stop here for now. We will look at implementing a few more features in the next few posts, and then I am going to look into developing an ASP.NET MVC 2 application. Hope you all liked this post. If you have any queries / suggestions, feel free to comment below or contact me. Cheers!

    Technorati Tags: Silverlight, LINQ, Twitter API, Twitter, Silverlight 4

    Read the article

  • Oracle(R) Buys Pre-Paid Software Assets From eServGlobal

    - by Paulo Folgado
    Oracle to Deliver Scalable Carrier-Grade Pre-Paid Solution Based on Open, Flexible IT-Based Platform

    News Facts

    · Oracle has agreed to acquire certain pre-paid assets of eServGlobal, a provider of advanced IT-based, pre-paid charging solutions for the communications industry.
    · eServGlobal's Universal Service Platform (USP) includes a pre-paid charging application, a network-services platform and a messaging gateway. The ChargingMax, NumberMax, uVOMS, MessageMax, PromoMax Express and Social Relationship Management software currently supports more than 25 tier-one customers, including the world's largest IT-based installation of pre-paid services.
    · The combination of Oracle Communications Billing and Revenue Management and the USP applications is expected to accelerate the shift from network- to IT-based pre-paid systems by providing the first convergent, open IT-based platform from a leading business software and hardware systems company.
    · Customers are expected to benefit from traditional carrier-grade, pre-paid service authorization with IT-grade flexibility that supports any service or network, is easier to deploy and maintain and delivers an overall lower total cost of ownership.
    · The transaction is expected to close in the second half of this year.

    Supporting Quote

    · "The majority of mobile phone users worldwide use pre-paid plans, and that number is growing exponentially. Oracle Communications applications combined with the pre-paid software assets from eServGlobal will provide our customers with highly available and scalable carrier-grade, pre-paid software on an open, convergent platform. This will enable our customers to deliver traditional pre-paid voice services and easily introduce hybrid pre-paid and post-paid plans with targeted pricing, promotions and service bundles that include voice, data and network services," said Liam Maxwell, vice president of products, Oracle Communications.

    Supporting Resources

    · About Oracle and eServGlobal
    · USP General Presentation
    · FAQ

    Read the article

  • EF4 CTP5 Code First Remove Cascading Deletes

    - by Dane Morgridge
    I have been using EF4 CTP5 with code first and I really like the new code. One issue I was having, however, was that cascading deletes are on by default. This may come as a surprise, since when using Entity Framework with anything but code first, this is not the case. I ran into an exception with some one-to-many relationships I had:

    Introducing FOREIGN KEY constraint 'ProjectAuthorization_UserProfile' on table 'ProjectAuthorizations' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. See previous errors.

    To get around this, you can use the fluent API and put some code in OnModelCreating:

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<UserProfile>()
            .HasMany(u => u.ProjectAuthorizations)
            .WithRequired(a => a.UserProfile)
            .WillCascadeOnDelete(false);
    }

    This will work to remove the cascading delete, but I have to use the fluent API and it has to be done for every one-to-many relationship that causes the problem. I am personally not a fan of cascading deletes in general (for several reasons) and I'm not a huge fan of fluent APIs. However, there is a way to do this without using the fluent API. In OnModelCreating, you can remove the convention that creates the cascading deletes altogether:

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
    }

    Thanks to Jeff Derstadt from Microsoft for the info on removing the convention altogether. There is a way to build a custom attribute to remove it on a case-by-case basis, and I'll have a post on how to do this in the near future.
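    For context, here is a minimal pair of entity classes consistent with the fluent mapping above (only the navigation property names come from the snippet; the key properties are assumptions):

    public class UserProfile
    {
        public int Id { get; set; }
        public virtual ICollection<ProjectAuthorization> ProjectAuthorizations { get; set; }
    }

    public class ProjectAuthorization
    {
        public int Id { get; set; }
        public virtual UserProfile UserProfile { get; set; }
    }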

    Read the article

  • How to resolve SSPI context error without changing Service Account from MSSQL

    - by kockiren
    There is an issue when connecting from new Windows 8.1 clients to SQL Server 2008 running on Windows Server 2008 R2. The SQL service runs under the account domain\mssqlservice. On a machine where the connection works fine, I get this output from setspn -l domain\mssqlservice:

    C:\>setspn -l domain\mssqlservice
    Registered service principal names (SPNs) for CN=MSSQLService,CN=Users,DC=domain,DC=local,DC=tld:
        MSSQLSvc/mssql.domain.local.tld:1433
        MSSQLSvc/mssql.domain.local.tld
        MSSQLSERVER/mssql.domain.local.tld:1433

    On a Windows 8.1 machine where it doesn't work, I get this output:

    C:\>setspn -l domain\msssqlservice
    FindDomainForAccount: Call to DsGetDcNameWithAccountW failed with return value 0x0000054B. Account kockiren was not found.

    In this post I found a solution, but I can't change the service account that runs the SQL service: some applications need this service delegation. How can I get it to work on my Windows 8.1 clients?

    Read the article

  • Can I pin a cmd/batch file to command prompt on the Windows 7 taskbar?

    - by eidylon
    Hey all, just wanted to ask quickly if anyone knows how to do this. If not, I'll post it on connect.microsoft.com as a suggestion. With Explorer pinned to the taskbar you can pin folders to Explorer, so they are in the jump list for the Explorer shortcut. I have a shortcut to CMD pinned to my taskbar and would like to be able to do the same thing - that is, to be able to pin a bat/cmd file to the CMD shortcut, so that in the CMD jump list I could have quick access to a couple of batch files I use routinely. Anyone know a way to do this? I tried just dragging the batch file to the CMD shortcut on the taskbar, like you do with folders and Explorer, but that did not work.

    Read the article

  • C#.NET (AForge) against Java (JavaCV, JMF) for video processing

    - by Leron
    I'm starting to get really confused looking deeper and deeper into the video processing world and searching for optimal choices. This is the reason for posting this and some other questions: to try to navigate myself in the best possible way. I really like Java and working and coding with it; at the same time, C# is not that different from Java, and Visual Studio is maybe the best IDE I've been working with.

    So even though I really want to do my projects in Java so I can become a better and better Java programmer, I'm also really attracted to video processing, and even though I'm still at the beginning of this journey, I want to take the right path. I'm really in doubt whether Java could be used in a production environment for serious video processing software. As the title says, I have already been looking at maybe the two most used technologies for video processing in Java - JMF and JavaCV - and I'm starting to think that even though they are used and they provide some functionality, when it comes to real work and real projects they are not the first thing that comes to one's mind, I mean to someone with a professional opinion about this.

    On the other hand, I haven't had the time to investigate the .NET (C# specifically) options, but even AForge looks like a much more serious library than those provided for Java. So in general: either way I'm going to spend a lot of time learning some technology and trying to do something that makes sense with it, and my plan is for the end result to become my headline project, to represent my skills and eventually help me find a job in the field. So I really don't want to spend time learning something that will give me the programming result I want but at the same time is not something that is needed in real-world development.

    So what is your opinion? Which language/technology is better for this specific issue? Which one is worth more in the terms that I specified above?

    Read the article

  • Are you a GPGPU developer? Participate in our UX study

    - by Daniel Moth
    You know that I work on the parallel debugger in Visual Studio, and I've talked about GPGPU before and I have also mentioned UX. Below is a request from my UX colleagues that pulls all of it together. If you write and debug parallel code that uses GPUs for non-graphical, computationally intensive operations, keep reading.

    The Microsoft Visual Studio Parallel Computing team is seeking developers for a 90-minute research study. The study will take place via LiveMeeting or at a usability lab in Redmond, depending on your preference. We will walk you through an example of debugging GPGPU code in Visual Studio, with you giving us step-by-step feedback ("Is this what you would expect?", "Are we showing you the things that would help you?", "How would you improve this?"). The walkthrough utilizes a "paper" version of our current design. After the walkthrough, we would then show you some additional design ideas and seek your input on various design tradeoffs.

    Are you interested, or do you know someone who might be a good fit? Let us know at this address: [email protected]. Those who participate (and those who referred them) will receive a gratuity item from a list of current Microsoft products.

    Comments about this post welcome at the original blog.

    Read the article

  • dpkg behaving strangely?

    - by Tom Henderson
    When I use apt to get a package, I have been receiving the same error message. Here is an example of trying to install wicd (which is already installed):

    Reading package lists... Building dependency tree... Reading state information...
    wicd is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
    3 not fully installed or removed.
    After this operation, 0B of additional disk space will be used.
    Setting up tex-common (2.06) ...
    debconf: unable to initialize frontend: Dialog
    debconf: (Dialog frontend requires a screen at least 13 lines tall and 31 columns wide.)
    debconf: falling back to frontend: Readline
    Running mktexlsr. This may take some time... done.
    No packages found matching texlive-base.
    dpkg: error processing tex-common (--configure):
     subprocess installed post-installation script returned error exit status 1
    dpkg: dependency problems prevent configuration of texlive-binaries:
     texlive-binaries depends on tex-common (>= 2.00); however:
      Package tex-common is not configured yet.
    dpkg: error processing texlive-binaries (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of dvipng:
     dvipng depends on texlive-base-bin; however:
      Package texlive-base-bin is not installed.
      Package texlive-binaries which provides texlive-base-bin is not configured yet.
    dpkg: error processing dvipng (--configure):
     dependency problems - leaving unconfigured
    No apport report written because the error message indicates its a followup error from a previous failure.
    No apport report written because the error message indicates its a followup error from a previous failure.
    Errors were encountered while processing:
     tex-common
     texlive-binaries
     dvipng
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure if this is a problem with apt or with dpkg, but it certainly doesn't look good!

    Read the article
