Search Results

Search found 4054 results on 163 pages for 'surround sound'.

  • What is Causing this IIS 7 Web Service Sporadic Connectivity Error?

    - by dpalau
    On sporadic occasions we receive the following error when attempting to call an .asmx web service from a .Net client application: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host." By sporadic I mean that it might not occur at all for days, or it might occur a half-dozen times a day for some users. It never occurs on a user's first web service call, and the subsequent (usually identical) call always works immediately after the failure. The failures happen across a variety of methods in the service and usually occur 15-20 seconds (according to the log) after the request. Looking in the IIS site log for the particular call will show one or the other of the following Windows error codes: 121 (The semaphore timeout period has elapsed) or 1236 (The network connection was aborted by the local system).

    Some additional environment details: The service runs on an internal web farm consisting of two servers running IIS7 on Windows Server 2008. These problems did not occur when running on an older IIS6 web farm of three servers running Windows Server 2003 (and we use a single IIS6/2003 instance for our development and staging environments with no issues). EDIT: Also, all of these server instances are VMware virtual machines, not sure if that is a surprise anymore or not. The web service is a .Net 2.0/3.5 compiled .asmx web service that has its own application pool (.Net 2.0, integrated pipeline) and only has Windows Authentication enabled. We have another web service on the farm that uses the same physical path as the primary service, the only difference being that Basic Authentication is enabled; this is used for a portion of our ERP system. We have tried using the same and different application pools, with no effect on the error. That site isn't hit as often as the primary site and has never had an error. As mentioned, the error only happens when called from the .Net client - not from other applications. The client application always creates a new web service object for each request and sets the service credentials to System.Net.CredentialCache.DefaultCredentials. The application is either deployed locally to a client or run in a Citrix server session. Users running in Citrix don't seem to experience the issue, only locally deployed clients. The Citrix servers and the web farm are located in the same physical location and the same IP range (10.67.xx.xx); locally deployed clients experiencing the error are located elsewhere (10.105.xx.xx, 10.31.xx.xx). I've checked the OS logs to see if I can see any problems but nothing really sticks out.

    EDIT: Actually, I myself just ran into the error a little bit ago. I decided to check out the logs again and saw that there was a Security log entry of "Audit Failure" at the 'same' time (IIS log entry at 1:39:59, event log entry at 1:39:50). Not sure if this is a coincidence or not; I'll have to check out the logs of previous errors. I'm probably grasping at straws, but the details:

        Log Name: Security
        Source: Microsoft-Windows-Security-Auditing
        Date: 7/8/2009 1:39:50 PM
        Event ID: 5159
        Task Category: Filtering Platform Connection
        Level: Information
        Keywords: Audit Failure
        User: N/A
        Computer: is071019.<**.net
        Description: The Windows Filtering Platform has blocked a bind to a local port.
        Application Information: Process ID: 1260, Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
        Network Information: Source Address: 0.0.0.0, Source Port: 54802, Protocol: 17
        Filter Information: Filter Run-Time ID: 0, Layer Name: Resource Assignment, Layer Run-Time ID: 36

    I've also tried to use Failed Request Tracing in IIS7, but the service call never actually gets to where FRT can capture it (even though the failure is logged in the web service log). The network infrastructure group said they checked the DNS and the NIC settings and everything is correct, so there is no 'flapping' - everything pans out. I'm not sure that they checked any domain controller servers, though, to see if that could be an issue. Any ideas? Or any other debugging strategies to get to the bottom of this? I'm just the developer in charge of the software and don't really have the knowledge on what to investigate from the networking side of things - although it does sound like a networking issue to me based on what is happening. Thanks in advance for any help.
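
    A client-side mitigation often suggested for this exact exception (not from the original post, and not a root-cause fix) is to disable HTTP keep-alive on the generated .asmx proxy so each call gets a fresh connection. A minimal sketch, assuming a generated proxy class named MyService (hypothetical name):

        // Hedged sketch: derive from the generated proxy and turn off keep-alive,
        // so a connection silently dropped between client and server is never reused.
        using System;
        using System.Net;

        public class MyServiceNoKeepAlive : MyService
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                var request = (HttpWebRequest)base.GetWebRequest(uri);
                request.KeepAlive = false; // one connection per call
                return request;
            }
        }

    This trades some connection overhead for robustness, and if the errors stop it at least narrows the problem down to connection reuse.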

    Read the article

  • How do I evaluate my skillset against the current market to see what needs improvement and where my

    - by baijajusav
    First of all, this question may be out of bounds for this site. If so, remove it. I say this because this site seems to be a place for more concrete questions that are not so relative in nature. And before I begin, for those of you who just prefer a question and not this sort of dialog, here is my question: How can I assess my current skills as a programmer and decide where and what areas to improve upon?

    That said, here's what I'm asking/talking about, in essence. The market is always in constant flux. As programmers we're always having to learn new things, update our skills, push ourselves into that next project. There's not a very good litmus test that I know of for us to get an idea of where we stand as programmers. I came across this blog post by Jeff Atwood talking about why programmers can't code. Instinctively (and as the post goes on to state) I rushed through the program in about 4 minutes (most of that time because I was hand-writing it out). Still, this doesn't really answer the question of where my skills need to be to succeed in today's world.

    I read blogs, listen to podcasts, and try to keep up on the latest things coming out. It has only been in the past couple of months that I made a decision to pick a focus area for my learning, as I can't learn everything and trying to do so is spreading myself too thin. I chose ASP.NET MVC and C#. I plan to stick with Microsoft technologies, not out of some sense of loyalty or stubbornness, but rather because they seem to stream together and have a unifying connection between them. With Windows Phone 7 coming out, it seems that now is the obvious time to pick up WPF and Silverlight as well. Still, if you asked me to code something without IntelliSense and the internet, I probably couldn't get the syntax right. I don't have libraries memorized or know precisely where the classes I use exist within the .Net framework, namely because I haven't had to pull that knowledge out of the air. In a way, I suppose Visual Studio has insulated me, which isn't a good thing, but, at the same time, I've still been able to be productive.

    I'm working on my own side project to try and help my learning. In doing so, I'm trying to make use of best practices and 3rd party frameworks where I can. I'm using AutoMapper and EF 1.0. I know everyone in the .net community seems to cry foul at the sound of EF 1.0, but I can't say why, because I've never used anything else. There's no lazy loading and that has proven rather annoying; however, aside from that, I haven't had that much of an issue. Granted, this is probably because I'm not writing tests as I go (which I'm not doing because I don't know how to test EF in tests and don't really have a clue how to write tests for ASP.NET MVC 1.0). I'm also using a custom membership provider; granted, it's a barebones implementation, but I'm using it still. My thinking in all of this is that, while I am neglecting a great many important technologies that are in the mainstream, I'll have a working project in the end. I can come back and add those things after I finish. Doing it all now and at once seems like too much; I know how I work and I don't think I'd ever get it done that way.

    I've elected to make this a community wiki as I think this question might fit better there. If a moderator disagrees with that choice or the decision to post this here, then just delete the question. I'm not trying to make undue work for anyone. I'm just a programmer trying to assess where my skills are now and where I should be improving.

    Read the article

  • If 'Architect' is a dirty word - what's the alternative; when not everyone can actually design a goo

    - by Andras Zoltan
    Now - I'm a developer first and foremost; but whenever I sit down to work on a big project with lots of interlinking components and areas, I will forward-plan my interfaces, base classes etc as best I can - putting on my Architect hat. For a few weeks I've been doing this for a huge project - designing whole swathes of interfaces etc for a business-wide platform that we're developing. The basic structure is a couple of big projects that consist of service and data interfaces, with some basic implementations of all of these. On their own, these assemblies are useless though, as they are simply intended as a scaffold on which to build a business-specific implementation (we have a lot of businesses). Therefore, the design of the core platform is absolutely crucial, since consumers of the system are not intended to know which implementation they are actually using. In the past it's not worked so well, but after a few proof-of-concepts and R&D projects this new platform is now growing nicely and is already proving itself.

    Then somebody else gets involved in the project - he's a TDD man who sees code-level architecture as an irrelevance and is definitely from the camp that 'architect' is a dirty word - I should add that our working relationship is very good despite this :) He's open about the fact that he can't architect in advance, and obviously TDD really helps him because it allows him to evolve his systems over time. That I get, and totally understand; but it means that his coding style, basically, doesn't seem to be able to honour the architecture that I've been putting in place.

    Now don't get me wrong - he's an awesome coder; but the other day he needed to extend one of his components (an implementation of a core interface) to bring in an extra implementation-specific dependency; and in doing so he extended the core interface as well as his implementation (he uses ReSharper), thus breaking the independence of the whole interface. When I pointed out his error to him, he was dismayed. Being test-first, all that mattered to him was that he'd made his tests pass, and he just said 'well, I need that dependency, so can't we put it in?'. Of course we could put it in, but I was frustrated that he couldn't see that refactoring the generic interface to incorporate an implementation-specific feature was just wrong! But it is all very Charlie Brown to him (you know the sound the adults make when they're talking to the children) - as far as he's concerned we don't need to worry about it because we can always refactor.

    The problem is, the culture of test-write-refactor is all very well and good - but not when you're dealing with a platform that is going to be shared out among so many projects that you could never get them all in one place to make the refactorings work. In my opinion, sometimes you actually have to think about what you're doing, and not just let nature take its course. Am I simply fulfilling the role of Architect as a dirty word here? I believe that architecture is important and should be thought about before code gets written, unless it's a particularly small project. But when you're working in a team of people who don't think that way, or even can't think that way, how can you actually get this across? Is it a case of simply making the architecture off-limits to changes by other people? I don't want to start having bloody committees just to be able to grow the system; but equally I don't want to be the only one responsible for it all.
    Do you think the architect role is a waste of time? Is it at odds with TDD and other practises? Can this mix of different practises be made to work, or should I just be a lot less precious (and in so doing allow a generic platform to become useless)? Or do I just lay down the law? Any ideas/experiences/views gratefully received.
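
    To make the dependency point concrete, here is a minimal sketch (hypothetical names, not from the post) of the alternative that keeps the core contract intact: the implementation-specific dependency is injected into the concrete class rather than surfaced on the shared interface.

        // Hedged sketch: the core interface stays generic; only the concrete
        // implementation knows about its extra dependency.
        public interface IRateLookup { decimal RateFor(string businessCode); }

        public interface ICoreQuoteService          // shared platform contract - unchanged
        {
            decimal Quote(string businessCode, decimal amount);
        }

        public class AcmeQuoteService : ICoreQuoteService
        {
            private readonly IRateLookup _rates;    // implementation-specific dependency

            public AcmeQuoteService(IRateLookup rates)  // injected here, not via the interface
            {
                _rates = rates;
            }

            public decimal Quote(string businessCode, decimal amount)
            {
                return amount * _rates.RateFor(businessCode);
            }
        }

    Consumers keep depending on ICoreQuoteService; only the composition root that wires up the Acme implementation ever sees IRateLookup.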

    Read the article

  • How do I handle calls to AudioTrack from jni without crashing?

    - by icecream
    I was trying to write to an AudioTrack from a jni callback, and I get a signal 7 (SIGBUS), fault addr 00000000. I have looked at the Wolf3D example for odroid and they seem to use an android.os.Handler to post a Runnable that will do an update in the correct thread context. I have also tried AttachCurrentThread, but I fail in this case also. It works to play the sound when running from the constructor, even if I wrap it in a thread and then post it to the handler. When I do the "same" via a callback from jni it fails. I assume I am breaking some rules, but I haven't been able to figure out what they are. So far, I haven't found the answer here on SO, so I wonder if anyone knows how this should be done. EDIT: Answered below.

    The following code is to illustrate the problem.

    Java:

        package com.example.jniaudiotrack;

        import android.app.Activity;
        import android.media.AudioFormat;
        import android.media.AudioManager;
        import android.media.AudioTrack;
        import android.os.Bundle;
        import android.os.Handler;
        import android.util.Log;

        public class JniAudioTrackActivity extends Activity {
            AudioTrack mAudioTrack;
            byte[] mArr;
            public static final Handler mHandler = new Handler();

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                mArr = new byte[2048];
                for (int i = 0; i < 2048; i++) {
                    mArr[i] = (byte) (Math.sin(i) * 128);
                }

                mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 11025,
                        AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_8BIT, 2048,
                        AudioTrack.MODE_STREAM);
                mAudioTrack.play();

                new Thread(new Runnable() {
                    public void run() {
                        mHandler.post(new Runnable() {
                            public void run() {
                                mAudioTrack.write(mArr, 0, 2048);
                                Log.i(TAG, "*** Handler from constructor ***");
                            }
                        });
                    }
                }).start();

                new Thread(new Runnable() {
                    public void run() {
                        audioFunc();
                    }
                }).start();
            }

            public native void audioFunc();

            @SuppressWarnings("unused")
            private void audioCB() {
                mHandler.post(new Runnable() {
                    public void run() {
                        mAudioTrack.write(mArr, 0, 2048);
                        Log.i(TAG, "*** audioCB called ***");
                    }
                });
            }

            private static final String TAG = "JniAudioTrackActivity";

            static {
                System.loadLibrary("jni_audiotrack");
            }
        }

    cpp:

        #include <jni.h>

        extern "C" {
        JNIEXPORT void Java_com_example_jniaudiotrack_JniAudioTrackActivity_audioFunc(JNIEnv* env, jobject obj);
        }

        JNIEXPORT void Java_com_example_jniaudiotrack_JniAudioTrackActivity_audioFunc(JNIEnv* env, jobject obj)
        {
            JNIEnv* jniEnv;
            JavaVM* vm;
            env->GetJavaVM(&vm);
            vm->AttachCurrentThread(&jniEnv, 0);
            jclass cls = env->GetObjectClass(obj);
            jmethodID audioCBID = env->GetMethodID(cls, "audioCB", "()V");
            if (!audioCBID) {
                return;
            }
            env->CallVoidMethod(cls, audioCBID);
        }

    Trace snippet:

        I/DEBUG ( 1653): pid: 9811, tid: 9811 >>> com.example.jniaudiotrack <<<
        I/DEBUG ( 1653): signal 7 (SIGBUS), fault addr 00000000
        I/DEBUG ( 1653): r0 00000800 r1 00000026 r2 00000001 r3 00000000
        I/DEBUG ( 1653): r4 42385726 r5 41049e54 r6 bee25570 r7 ad00e540
        I/DEBUG ( 1653): r8 000040f8 r9 41048200 10 41049e44 fp 00000000
        I/DEBUG ( 1653): ip 000000f8 sp bee25530 lr ad02dbb5 pc ad012358 cpsr 20000010
        I/DEBUG ( 1653): #00 pc 00012358 /system/lib/libdvm.so
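
    Two things stand out in the native function above (an observation on the posted code, not the poster's own accepted answer): CallVoidMethod is handed the jclass (cls) where the object instance (obj) belongs, and the env obtained for the attached thread (jniEnv) is never actually used. A hedged sketch of the corrected function:

        // Hedged sketch: pass the instance, not the class, and use the env that
        // belongs to the attached thread throughout.
        JNIEXPORT void Java_com_example_jniaudiotrack_JniAudioTrackActivity_audioFunc(JNIEnv* env, jobject obj)
        {
            JavaVM* vm;
            env->GetJavaVM(&vm);

            JNIEnv* jniEnv;
            vm->AttachCurrentThread(&jniEnv, 0);  // no-op if this thread is already attached

            jclass cls = jniEnv->GetObjectClass(obj);
            jmethodID audioCBID = jniEnv->GetMethodID(cls, "audioCB", "()V");
            if (audioCBID) {
                jniEnv->CallVoidMethod(obj, audioCBID);  // obj, not cls
            }
        }

    Since the thread here was started from Java, it is already attached and the AttachCurrentThread call is harmless; the crucial change is the first argument to CallVoidMethod.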

    Read the article

  • Installing Windows on HP Proliant Servers without SmartStart

    - by Fitzroy
    I have a PXE server for deploying Windows XP and Windows 7 to workstations. The process is as follows:

    1. Boot the workstation from the NIC.
    2. The workstation sends a DHCP request.
    3. The DHCP server responds with an IP address and the location of the PXE server.
    4. The workstation downloads a WinPE image file from the PXE server via TFTP.
    5. The workstation stores the WinPE image file in memory and executes it.

    Once booted into WinPE, I connect to a network share to gain access to either the Windows XP or Windows 7 installation files. A custom script is launched to guide you through the process of formatting and partitioning the hard drive(s) (using DISKPART and FORMAT). Another custom script asks for details such as the hostname to assign to the workstation. The answers provided are used to build an unattended answer file (SIF [Setup Information File] for WinXP and XML for Win7). The Windows setup EXE is launched, passing the unattended answer file to it as a parameter. The Windows XP and Windows 7 installation sources have been customised to include the drivers for our Dell workstations. They also run a number of scripts upon first booting up to install software packages.

    This process works very well for our workstations and I would now like to use it for building our servers too. The vast majority of our servers are HP Proliant DL360 G6, DL380 G5 and DL380 G6, running Windows Server 2003 (various editions) or 2008 (various editions). To date, we have always built the HP Proliant servers using the SmartStart CD provided. SmartStart does three useful things for us:

    1. Sets up RAID with the HP Array Configuration Utility (ACU).
    2. Installs and configures SNMP.
    3. Installs various HP tools for Windows (HP Array Configuration Utility, HP Array Diagnostic Utility, HP Proliant Integrated Management Log Viewer, etc).

    Using SmartStart I have never had to manually download and install Windows drivers for network, sound, video, etc. I'm not sure if this is because SmartStart copies drivers from the CD during setup, or whether Windows just has the drivers natively in its driver CAB. If I abandon the SmartStart CD in favour of my PXE server I would have to do the following: as I won't have access to ACU, I'll configure the RAID (before booting to the PXE server) by pressing F8 during the boot process to access Option ROM Configuration for Arrays (ORCA); installation of SNMP and the HP tools will then have to happen once the Windows installation is complete, using the Proliant Support Pack. Is this method OK? Is there anything that the SmartStart CD does that I'll be unable to do by other means? Are there any disadvantages to not using the SmartStart CD? Many thanks.

    UPDATE 05/01/12: I've been reading through the SmartStart Scripting Toolkit documentation. The scripting toolkit contains command-line tools which work within WinPE and can do such things as configure BIOS settings, configure an array and set up iLO. I'm personally not too bothered about configuring BIOS settings, as I rarely deviate from the defaults (unless the server is to be a Hyper-V host). I'm not too fussed about being able to configure the array from within WinPE either, as I'm happy to just press F8 and use Option ROM Configuration for Arrays (ORCA). Although, if it's easy enough to do, I will explore this further, as it saves time if everything can be configured from within WinPE. One of the nice features all the tools possess is that you can pass input files to them, e.g. configure one server to your requirements, capture its configuration to a file (using the appropriate tool), and then use the tool on other servers, passing in the file with the captured configuration.

    Array controller drivers appear to be included with the toolkit, along with examples of how to incorporate them within a WinPE build. I suppose WinPE won't be able to see logical volumes (i.e. 2x physical disks in a RAID 1 configuration) without the array controller drivers? I mentioned in my post that SmartStart normally installs a bunch of Windows HP tools for you. I've had a look today, and if you run the SmartStart CD from within Windows all the tools can be installed, so I can do this after the Windows installation is complete. The SmartStart CD also appears to contain a lot of Windows drivers. I can customise my Windows 2008 source to incorporate these drivers. However, I understand that incorporating an array controller driver is a little different from most drivers: I believe that you have to provide the driver during the very early stages of the Windows setup. I'm working through the Scripting Toolkit documentation to try and work this out...
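
    For the driver side of this, one general-purpose approach (a sketch of standard Windows tooling, not something taken from the SmartStart documentation) is to slipstream the HP drivers, including the array controller driver, into the install image offline with DISM; Windows 7/Server 2008 R2 setup then picks them up without any early-setup driver prompt. Paths below are placeholders:

        rem Hedged sketch: mount the image, inject all drivers found under C:\drivers, commit.
        dism /Mount-Wim /WimFile:D:\sources\install.wim /Index:1 /MountDir:C:\mount
        dism /Image:C:\mount /Add-Driver /Driver:C:\drivers /Recurse
        dism /Unmount-Wim /MountDir:C:\mount /Commit

    The same /Add-Driver step can be run against the boot.wim used for WinPE, so that WinPE itself can see logical volumes behind the array controller.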

    Read the article

  • Resizing Images In a Table Using JavaScript

    - by Abluescarab
    Hey, all. I apologize beforehand if none of this makes sense, or if I sound incompetent... I've been making webpages for a while, but don't know any JavaScript. I am trying to make a webpage for a friend, and he asked if I could write some code to resize the images in the page based on the user's screen resolution. I did some research on this question, and it's kind of similar to this, this, and this. Unfortunately, those articles didn't answer my question entirely, because none of the methods described worked. Right now, I have a three-column webpage with 10 images in a table in the left sidebar, and when I use percentages for their sizes, they don't resize right between monitors. As such, I'm trying to write a JavaScript function that changes their sizes after detecting the screen resolution. The code gets stripped from my post if I include all of it, so I'll just say that each image links to a website and uses a separate image to change color when you hover over it. Would I have to address both images to change their sizes correctly? I use a JavaScript function to switch between them. Anyway, I tried both methods in this article and neither worked for me. If it helps, I'm using Google Chrome, but I'm trying to make this page as cross-browser as possible. Here's the code I have so far in my JavaScript function:

        function resizeImages() {
            var w = window.width;
            var h = window.height;
            var yuk = document.getElementById('yuk').style;
            var wb = document.getElementById('wb').style;
            var tf = document.getElementById('tf').style;
            var lh = document.getElementById('lh').style;
            var ko = document.getElementById('ko').style;
            var gz = document.getElementById('gz').style;
            var fb = document.getElementById('fb').style;
            var eg = document.getElementById('eg').style;
            var dl = document.getElementById('dl').style;
            var da = document.getElementById('da').style;

            if (w = "800" && h = "600") {
            } else if (w = "1024" && h = "768") {
            } else if (w = "1152" && h = "864") {
            } else if (w = "1280" && h = "720") {
            } else if (w = "1280" && h = "768") {
            } else if (w = "1280" && h = "800") {
            } else if (w = "1280" && h = "960") {
            } else if (w = "1280" && h = "1024") {
            }
        }

    Yeah, I don't have much in it because I don't know if I'm doing it right yet. Is this a way I can detect the "width" and "height" properties of a window? The "yuk", "wb", etcetera are the images I'm trying to change the size of. To sum it up: I want to resize images based on screen resolution using JavaScript, but my research attempts have been... futile. I'm sorry if that was long-winded, but thanks ahead of time!
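
    For reference, a minimal corrected sketch (my assumptions, not the original poster's code): window.width and window.height don't exist (use window.innerWidth/innerHeight, or screen.width for the monitor resolution), and a single = assigns rather than compares. Scaling proportionally also avoids listing every resolution. The element ids are the ones from the post; the 1280px baseline is an assumption:

        // Hedged sketch: scale each image from a 1280px-wide baseline design.
        function resizeImages() {
            var w = window.innerWidth || document.documentElement.clientWidth;
            var ids = ['yuk', 'wb', 'tf', 'lh', 'ko', 'gz', 'fb', 'eg', 'dl', 'da'];
            var scale = w / 1280; // baseline design width - tune to the actual layout

            for (var i = 0; i < ids.length; i++) {
                var img = document.getElementById(ids[i]);
                if (img) {
                    // naturalWidth is the intrinsic size, so resizing stays idempotent
                    img.style.width = Math.round(img.naturalWidth * scale) + 'px';
                }
            }
        }
        window.onload = resizeImages;
        window.onresize = resizeImages;

    If the hover effect swaps the src on the same img element, sizing the element once covers both states, so both images don't need separate handling.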

    Read the article

  • Load balancing using Mina example with Java DSL

    - by Flame_Phoenix
    So, recently I started learning Camel. As part of the process I decided to go through all the examples (listed HERE and available when you DOWNLOAD the package with all the examples and docs) and to see what I could learn. One of the examples, Load Balancing using Mina, caught my attention because it runs MINA in different JVMs and simulates a load balancer with round robin. I have a few problems with this example.

    First, it uses the Spring DSL instead of the Java DSL, which my project uses and which I now find a lot easier to understand (mainly because I am used to it). So the first question: is there a version of this example using only the Java DSL instead of the Spring DSL for the routes and the beans?

    My second question is code related. The description states, and I quote: "Within this demo every ten seconds, a Report object is created from the Camel load balancer server. This object is sent by the Camel load balancer to a MINA server where the object is then serialized. One of the two MINA servers (localhost:9991 and localhost:9992) receives the object and enriches the message by setting the field reply of the Report object. The reply is sent back by the MINA server to the client, which then logs the reply on the console."

    So, from what I read, I understand that MINA server 1 (for example) receives a report from the load balancer, changes it, and then sends that report back to some invisible client. Upon checking the code, I see no client Java class or XML, and when I run it, the server simply posts the results on the command line. Where is the client? What is this client? Here is the MINA server 1 code:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:camel="http://camel.apache.org/schema/spring"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                   http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

            <bean id="service" class="org.apache.camel.example.service.Reporting"/>

            <camelContext xmlns="http://camel.apache.org/schema/spring">
                <route id="mina1">
                    <from uri="mina:tcp://localhost:9991"/>
                    <setHeader headerName="minaServer">
                        <constant>localhost:9991</constant>
                    </setHeader>
                    <bean ref="service" method="updateReport"/>
                </route>
            </camelContext>
        </beans>

    I don't understand how the updateReport method magically prints the object on my console. What if I wanted to send the message to a third MINA server? How would I do it? (I would have to add a new route and send it to the URI of the third server, correct?) I know most of these questions may sound dumb, but I would appreciate it if anyone could help me. A Java DSL version of this would really help me.
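
    As a starting point, here is a hedged sketch of what the mina1 route above looks like in the Java DSL (my translation of the Spring XML, not code shipped with the Camel distribution):

        // Hedged sketch: Java DSL equivalent of the <route id="mina1"> definition.
        import org.apache.camel.builder.RouteBuilder;
        import org.apache.camel.example.service.Reporting;

        public class Mina1Route extends RouteBuilder {
            @Override
            public void configure() throws Exception {
                from("mina:tcp://localhost:9991")
                    .routeId("mina1")
                    .setHeader("minaServer", constant("localhost:9991"))
                    .bean(new Reporting(), "updateReport");
            }
        }

    As for the "invisible client": in request-reply mode the consumer side of the mina endpoint itself plays that role - whatever body the route ends with (here, the Report as modified by updateReport) is written back over the same MINA connection to the load balancer that sent it, which is why nothing in this XML explicitly sends a reply.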

    Read the article

  • Survey Data Model - How to avoid EAV and excessive denormalization?

    - by AlexDPC
    Hi everyone, my database skills are mediocre at best and I have to design a data model for survey data. I have spent some thought on this, and right now I feel that I am stuck between some kind of EAV model and a design involving hundreds of tables, each with hundreds of columns (and thousands of records). There must be a better way to do this and I hope that the wise folks on this forum can help me. I have already searched various forums, but I couldn't really find a solution. If it has already been given elsewhere, please excuse me and provide me with a link so I can read it up.

    Some assumptions about the data I have to deal with:

    - Each survey consists of 1 to n questionnaires.
    - Each questionnaire consists of 100-2,000 questions (please ignore that 2,000 questions really sounds like a lot to answer...).
    - Questions can be of various types: multiple-choice, free text, a number (like age, income, percentages, ...).
    - Each survey involves 10-200 countries. (These are not the respondents; the respondents are actually people in the countries.)
    - Depending on the type of questionnaire, each questionnaire is answered by 100-20,000 respondents per country.
    - A country can adapt the questionnaires for a survey, i.e. add, remove or edit questions.
    - The data for one country is gathered in a separate database in that country. There is no possibility of online integration from the start. The data for all countries has to be integrated later. This means, for example, that if a country has deleted a question, that data must somehow be derived from what they sent in order to achieve a uniform design across all countries.
    - I will have to write the integration and cleaning software, which will need to work with every country's data.
    - In the end the data needs to be exported to flat files, one rectangular grid per country and questionnaire.

    I have already discussed this topic with people from various backgrounds and have not come to a good solution yet. I mainly got two kinds of opinions.

    The domain experts, who are used to working with flat files (spreadsheet-style) for data processing and analysis, vote for a denormalized structure with loads of tables and columns, as I described above (1 table per country and questionnaire). This sounds terrible to me, because I learned that wide tables are to be avoided, it will be annoying to determine which columns are actually in a table when working with it, the database will become cluttered with hundreds of tables (or I would even need to set up multiple databases, each with a similar yet slightly different design), etc.

    The O-O programmers vote for a strongly "normalized" design, which would effectively lead to a central table containing all the answers from all respondents to all questions. This table would either need to contain a column of type sql_variant, or multiple answer columns with different types to store answers of different types (multiple-choice, free text, ...). The former would essentially be an EAV model. I tend to follow Joe Celko here, who strongly discourages its use (he calls it OTLT or "One True Lookup Table"). The latter would imply that each row would contain null cells for the not-applicable types by design.

    Another alternative I could think of would be to create one table per answer type, i.e. one for multiple-choice questions, one for free text questions, etc. That's not so generic; I think it would lead to a lot of union joins, and I would have to add a table if a new answer type is invented.

    Sorry for boring you with all this text, and thank you for your input!
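
    For what it's worth, a minimal sketch of the "multiple typed answer columns" variant mentioned above, with a CHECK constraint so exactly one column is populated per row (illustrative names, not a recommendation from the thread):

        -- Hedged sketch: one row per respondent/question; exactly one value column set.
        CREATE TABLE Answer (
            RespondentId INT NOT NULL,
            QuestionId   INT NOT NULL,
            NumericValue DECIMAL(18,4) NULL,   -- age, income, percentages, ...
            TextValue    NVARCHAR(4000) NULL,  -- free text
            ChoiceId     INT NULL,             -- FK to the chosen option (multiple-choice)
            PRIMARY KEY (RespondentId, QuestionId),
            CHECK ((CASE WHEN NumericValue IS NOT NULL THEN 1 ELSE 0 END
                  + CASE WHEN TextValue    IS NOT NULL THEN 1 ELSE 0 END
                  + CASE WHEN ChoiceId     IS NOT NULL THEN 1 ELSE 0 END) = 1)
        );

    The nulls are benign here because each row's type is fixed by its question, and the constraint keeps the rows honest without resorting to sql_variant.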
Cheers, Alex PS: I asked the same question here: http://www.eggheadcafe.com/community/aspnet/13/10242616/survey-data-model--how-to-avoid-eav-and-excessive-denormalization.aspx

    Read the article

  • Screen Casting using ffmpeg (too fast)

    - by rowman
    I can use ffmpeg to make screencasts:

        ffmpeg -f x11grab -s 1280x800 -i :0.0 -c:v libx264 -framerate 30 -r 30 -crf 18 out.mkv

    However, the output comes out too fast paced. It also happens with GTK RecordMyDesktop if I enable encode-on-the-fly. So the question is how to get a normal video pace. Also, in order to capture sound with ffmpeg, what option should be used?

    FFmpeg output:

        ffmpeg -f x11grab -s 1280x800 -r 30 -i :0.0 -c:v libx264 -framerate 30 -r 30 -crf 18 out.mkv
        ffmpeg version N-35162-g87244c8 Copyright (c) 2000-2012 the FFmpeg developers
          built on Oct 7 2012 15:56:19 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
          configuration: --enable-gpl --enable-libfaac --enable-libfdk-aac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-x11grab --enable-libx264 --enable-nonfree --enable-version3
          libavutil      51. 73.102 / 51. 73.102
          libavcodec     54. 64.100 / 54. 64.100
          libavformat    54. 29.105 / 54. 29.105
          libavdevice    54.  3.100 / 54.  3.100
          libavfilter     3. 19.102 /  3. 19.102
          libswscale      2.  1.101 /  2.  1.101
          libswresample   0. 16.100 /  0. 16.100
          libpostproc    52.  1.100 / 52.  1.100
        [x11grab @ 0xab896a0] device: :0.0 -> display: :0.0 x: 0 y: 0 width: 1280 height: 800
        [x11grab @ 0xab896a0] shared memory extension found
        [x11grab @ 0xab896a0] Estimating duration from bitrate, this may be inaccurate
        Input #0, x11grab, from ':0.0':
          Duration: N/A, start: 1350136942.608988, bitrate: 983040 kb/s
          Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1280x800, 983040 kb/s, 30 tbr, 1000k tbn, 30 tbc
        [libx264 @ 0xab87320] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64 SlowCTZ SlowAtom
        [libx264 @ 0xab87320] profile High 4:4:4 Predictive, level 3.2, 4:4:4 8-bit
        [libx264 @ 0xab87320] 264 - core 128 r2 198a7ea - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=18.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
        Output #0, matroska, to 'out.mkv':
          Metadata:
            encoder         : Lavf54.29.105
          Stream #0:0: Video: h264, yuv444p, 1280x800, q=-1--1, 1k tbn, 30 tbc
        Stream mapping:
          Stream #0:0 -> #0:0 (rawvideo -> libx264)
        Press [q] to stop, [?] for help
        frame= 10 fps=0.0 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/s
        frame= 19 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/s
        frame= 28 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/s
        frame= 37 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/s
        frame= 45 fps= 16 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/s
        frame= 47 fps= 14 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/s
        frame= 52 fps= 13 q=24.0 size= 257kB time=00:00:00.00 bitrate=2101632.0kbi
        frame= 55 fps= 12 q=24.0 size= 257kB time=00:00:00.10 bitrate=20808.2kbits
        frame= 59 fps= 11 q=24.0 size= 289kB time=00:00:00.23 bitrate=10145.0kbits
        frame= 64 fps= 11 q=24.0 size= 289kB time=00:00:00.40 bitrate=5894.7kbits/
        frame= 70 fps= 11 q=24.0 size= 289kB time=00:00:00.60 bitrate=3933.1kbits/
        frame= 72 fps= 10 q=24.0 size= 289kB time=00:00:00.66 bitrate=3549.2kbits/
        frame= 77 fps=9.8 q=24.0 size= 289kB time=00:00:00.83 bitrate=2837.7kbits/
        frame= 80 fps=9.6 q=24.0 size= 289kB time=00:00:00.93 bitrate=2533.5kbits/
        frame= 85 fps=9.3 q=24.0 size= 289kB time=00:00:01.10 bitrate=2146.9kbits/
        frame= 89 fps=9.3 q=24.0 size= 289kB time=00:00:01.23 bitrate=1917.1kbits/
        frame= 92 fps=9.1 q=24.0 size= 289kB time=00:00:01.33 bitrate=1773.3kbits/
        frame= 96 fps=9.0 q=24.0 size= 289kB time=00:00:01.46 bitrate=1612.4kbits/
        frame= 99 fps=8.8 q=24.0 size= 321kB time=00:00:01.56 bitrate=1676.8kbits/
        frame= 104 fps=8.7 q=24.0 size= 321kB time=00:00:01.73 bitrate=1515.2kbits/
        frame= 109 fps=5.3 q=24.0 Lsize= 1093kB time=00:00:03.56 bitrate=2511.5kbits/s
        video:1092kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.120198%
        [libx264 @ 0xab87320] frame I:3 Avg QP:18.93 size:142610
        [libx264 @ 0xab87320] frame P:43 Avg QP:20.79 size: 15751
        [libx264 @ 0xab87320] frame B:63 Avg QP:23.75 size: 195
        [libx264 @ 0xab87320] consecutive B-frames: 21.1% 1.8% 11.0% 66.1%
        [libx264 @ 0xab87320] mb I I16..4: 50.0% 21.1% 28.9%
        [libx264 @ 0xab87320] mb P I16..4: 6.1% 0.9% 3.2% P16..4: 5.5% 1.2% 0.6% 0.0% 0.0% skip:82.5%
        [libx264 @ 0xab87320] mb B I16..4: 0.4% 0.1% 0.0% B16..8: 2.9% 0.1% 0.0% direct: 0.0% skip:96.5% L0:40.7% L1:57.0% BI: 2.3%
        [libx264 @ 0xab87320] 8x8 transform intra:14.5% inter:46.1%
        [libx264 @ 0xab87320] coded y,u,v intra: 33.5% 24.1% 25.4% inter: 0.9% 0.4% 0.4%
        [libx264 @ 0xab87320] i16 v,h,dc,p: 70% 26% 1% 3%
        [libx264 @ 0xab87320] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 11% 21% 30% 5% 7% 5% 7% 4% 10%
        [libx264 @ 0xab87320] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 35% 12% 2% 4% 3% 4% 3% 5%
        [libx264 @ 0xab87320] Weighted P-Frames: Y:0.0% UV:0.0%
        [libx264 @ 0xab87320] ref P L0: 57.0% 5.6% 26.8% 10.6%
        [libx264 @ 0xab87320] ref B L0: 69.4% 22.6% 8.0%
        [libx264 @ 0xab87320] ref B L1: 93.7% 6.3%
        [libx264 @ 0xab87320] kb/s:2460.40
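
    A hedged suggestion (general ffmpeg usage, not verified on the poster's build): give the grab rate as an input option, i.e. -r 30 placed before -i :0.0, so the x11grab timestamps match wall-clock time, and add an ALSA/PulseAudio input for sound. Note also that the log's fps= values sink well below 30, meaning the encoder can't keep up in real time, which on its own makes playback look sped up; -preset ultrafast reduces that load. Device names below are common defaults, not taken from the poster's machine:

        # Hedged sketch: capture the screen at a real-time 30 fps plus system audio.
        ffmpeg -f alsa -i pulse \
               -f x11grab -r 30 -s 1280x800 -i :0.0 \
               -c:v libx264 -preset ultrafast -crf 18 -c:a libvorbis out.mkv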

    Read the article

  • Explicit method tables in C# instead of OO - good? bad?

    - by FunctorSalad
    Hi! I hope the title doesn't sound too subjective; I absolutely do not mean to start a debate on OO in general. I'd merely like to discuss the basic pros and cons of different ways of solving the following sort of problem. Let's take this minimal example: you want to express an abstract datatype T with functions that may take T as input, output, or both:

        f1 : Takes a T, returns an int
        f2 : Takes a string, returns a T
        f3 : Takes a T and a double, returns another T

    I'd like to avoid downcasting and any other dynamic typing. I'd also like to avoid mutation whenever possible.

    1: Abstract-class-based attempt

        abstract class T {
            abstract int f1();

            // We can't have abstract constructors, so the best we can do, as I see it, is:
            abstract void f2(string s);
            // The convention would be that you'd replace calls to the original f2 by invocation
            // of the nullary constructor of the implementing type, followed by invocation of f2.
            // f2 would need to have side-effects to be of any use.

            // f3 is a problem too:
            abstract T f3(double d);
            // This doesn't express that the return value is of the *same* type as the object
            // whose method is invoked; it just expresses that the return value is *some* T.
        }

    2: Parametric polymorphism and an auxiliary class (all implementing classes of TImpl will be singleton classes):

        abstract class TImpl<T> {
            abstract int f1(T t);
            abstract T f2(string s);
            abstract T f3(T t, double d);
        }

    We no longer express that some concrete type actually implements our original spec - an implementation is simply a type Foo for which we happen to have an instance of TImpl. This doesn't seem to be a problem: if you want a function that works on arbitrary implementations, you just do something like:

        // Say we want to return a Bar given an arbitrary implementation of our abstract type
        Bar bar<T>(TImpl<T> ti, T t);

    At this point, one might as well skip inheritance and singletons altogether and use a

    3: First-class function table

        class /* or struct, even */ TDict<T> {
            readonly Func<T, int> f1;
            readonly Func<string, T> f2;
            readonly Func<T, double, T> f3;

            TDict( ... ) {
                this.f1 = f1;
                this.f2 = f2;
                this.f3 = f3;
            }
        }

        Bar bar<T>(TDict<T> td, T t);

    Though I don't see much practical difference between #2 and #3.

    Example implementation:

        class MyT {
            /* raw data structure goes here; this class needn't have any methods */
        }

        // It doesn't matter where we put the following; it could be a static method of MyT,
        // or some static class collecting dictionaries
        static readonly TDict<MyT> MyTDict = new TDict<MyT>(
            (t) => /* body of f1 goes here */,
            (s) => /* body of f2 goes here */,     // f2
            (t, d) => /* body of f3 goes here */   // f3
        );

    Thoughts? #3 is unidiomatic, but it seems rather safe and clean. One question is whether there are any performance concerns with it. I don't usually need dynamic dispatch, and I'd prefer these function bodies to get statically inlined in places where the concrete implementing type is known statically. Is #2 better in that regard?

    Read the article

  • Database Table Schema and Aggregate Roots

    - by bretddog
    Hi, the application is single-user, 1-tier (one PC), and the database is SqlCE. The DataService layer will be (I think): a repository returning domain objects and querying the database with LinqToSql (dbml). There are obviously a lot more columns; this is a simplified view.

    http://img573.imageshack.us/img573/3612/ss20110115171817w.png

    This is my first attempt at creating a 2-table database. I think the table schema makes sense, but I need some reassurance or critique, because the table relations look quite scary, to be honest. I'm hoping you could look at the table schema and respond if there are clear signs of trouble or errors that you spot right away, and, if you have time, look at the Program Summary/Questions below and see if the table layout makes sense for those points. Please be brutal, I will try to defend :)

    Program summary:

    a) A set of categories, each having a set of strategies (1:m)
    b) Each day a number of items will be produced, and each strategy MAY reference them. (So there can be 50 items, and a strategy may reference 23 of them.)
    c) An item can be referenced by more than one strategy. So I think it's an m:m relation.
    d) Status values will be logged at fixed time-fractions through the day, for each Strategy, each StrategyItem and each Item.
    e) An action on an item may be executed by a strategy that references it. This is logged as ItemAction (I could have called it StrategyItemAction).

    User requests: b) - e) describe the main activity mode of the program: to work with only today's DayLog, for each category. The second-priority activity is retrieval of history, which typically will be: from all categories, from day x to day y, get all StrategyDailyLog entries.

    Questions:

    First, does the overall layout look sound? I'm worried to see that there are so many relationships in all directions, connecting everything. Is this normal, or does it look like trouble?

    StrategyItem is made to represent an m:m relationship. Is it correct as I noted 1:m / 1:1 (marked red)?

    StrategyItemTimeLog and ItemTimeLog log values that both need to be retrieved together when retrieving a StrategyItem. The reason I separated them is that the first one is strategy-specific, while several strategies can reference the same item. So I thought not to duplicate those values that are not dependent on the strategy, but only on the item. Hence I also dragged out the LogTime, as it seems to be the only parameter to unite the logs. But this all looks quite disturbing with those 3 tables. Does it make sense at all? Or do you have a suggestion?

    The pink circles show my vague attempt at Aggregate Root paths. I've been thinking in terms of "what entity is responsible for delete". Though I'm unsure about the actual root, I think it's Category. Does it make sense related to the user requests described above?
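
    As a point of comparison, the m:m link between Strategy and Item is usually expressed with a junction table whose primary key is the pair of foreign keys - a hedged sketch with assumed column names (the real columns are only visible in the linked image):

        -- Hedged sketch: StrategyItem as a classic junction table.
        CREATE TABLE StrategyItem (
            StrategyId INT NOT NULL REFERENCES Strategy(Id),
            ItemId     INT NOT NULL REFERENCES Item(Id),
            PRIMARY KEY (StrategyId, ItemId)
        );

    With that in place, each side of the relationship is 1:m into the junction table, which matches the 1:m markings described in the question.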

    Read the article

  • How to best propagate changes upwards a hierarchical structure for binding?

    - by H.B.
    If I have a folder-like structure that uses the composite design pattern and I bind the root folder to a TreeView, it would be quite useful if I could display certain properties that are accumulated from the folder's contents. The question is: how do I best inform the folder that changes occurred in a child element, so that the accumulative properties get updated?

    The context in which I need this is a small RSS feed reader I am trying to make. These are the most important objects and aspects of my model:

    Composite interface:

        public interface IFeedComposite : INotifyPropertyChanged
        {
            string Title { get; set; }
            int UnreadFeedItemsCount { get; }
            ObservableCollection<FeedItem> FeedItems { get; }
        }

    FeedComposite (aka Folder):

        public class FeedComposite : BindableObject, IFeedComposite
        {
            private string title = "";
            public string Title
            {
                get { return title; }
                set { title = value; NotifyPropertyChanged("Title"); }
            }

            private ObservableCollection<IFeedComposite> children = new ObservableCollection<IFeedComposite>();
            public ObservableCollection<IFeedComposite> Children
            {
                get { return children; }
                set
                {
                    children.Clear();
                    foreach (IFeedComposite item in value)
                    {
                        children.Add(item);
                    }
                    NotifyPropertyChanged("Children");
                }
            }

            public FeedComposite() { }
            public FeedComposite(string title) { Title = title; }

            public ObservableCollection<FeedItem> FeedItems
            {
                get
                {
                    ObservableCollection<FeedItem> feedItems = new ObservableCollection<FeedItem>();
                    foreach (IFeedComposite child in Children)
                    {
                        foreach (FeedItem item in child.FeedItems)
                        {
                            feedItems.Add(item);
                        }
                    }
                    return feedItems;
                }
            }

            public int UnreadFeedItemsCount
            {
                get { return (from i in FeedItems where i.IsUnread select i).Count(); }
            }
        }

    Feed:

        public class Feed : BindableObject, IFeedComposite
        {
            private string url = "";
            public string Url
            {
                get { return url; }
                set { url = value; NotifyPropertyChanged("Url"); }
            }

            ...

            private ObservableCollection<FeedItem> feedItems = new ObservableCollection<FeedItem>();
            public ObservableCollection<FeedItem> FeedItems
            {
                get { return feedItems; }
                set
                {
                    feedItems.Clear();
                    foreach (FeedItem item in value)
                    {
                        AddFeedItem(item);
                    }
                    NotifyPropertyChanged("Items");
                }
            }

            public int UnreadFeedItemsCount
            {
                get { return (from i in FeedItems where i.IsUnread select i).Count(); }
            }

            public Feed() { }
            public Feed(string url) { Url = url; }
        }

    OK, so here's the thing: if I bind a TextBlock.Text to UnreadFeedItemsCount, there won't be notifications when an item is marked unread. So one of my approaches has been to handle the PropertyChanged event of every FeedItem, and if the IsUnread property changes, have my Feed raise a notification that the UnreadFeedItemsCount property has changed. With this approach I also need to handle all PropertyChanged events of all Feeds and FeedComposites in the Children of a FeedComposite. From the sound of it, it should be obvious that this is not such a good idea: you need to be very careful that items never get added or removed to any collection without having attached the PropertyChanged event handler first, and things like that. Also: what do I do with the CollectionChanged events, which necessarily also cause a change in the unread items count? Sounds like more event-handling fun.

    It is such a mess; it would be great if anyone has an elegant solution to this, since I don't want the feed reader to end up as awful as my first attempt years ago, when I didn't even know about data binding...
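
    For reference, a minimal sketch of the event-forwarding the question describes, reduced to its core (assumed member names; this is the very approach the poster finds messy, shown compactly):

        // Hedged sketch: a Feed forwards each item's IsUnread change as one
        // aggregate notification, and re-hooks new items on collection changes.
        private void HookItem(FeedItem item)
        {
            item.PropertyChanged += (s, e) =>
            {
                if (e.PropertyName == "IsUnread")
                    NotifyPropertyChanged("UnreadFeedItemsCount");
            };
        }

        private void HookCollection()
        {
            feedItems.CollectionChanged += (s, e) =>
            {
                if (e.NewItems != null)
                    foreach (FeedItem item in e.NewItems) HookItem(item);
                NotifyPropertyChanged("UnreadFeedItemsCount");
            };
        }

    A parent FeedComposite can subscribe to each child's PropertyChanged the same way and bubble "UnreadFeedItemsCount" upwards, so the TreeView only ever binds to ordinary notifying properties.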

    Read the article

  • Defend PHP; convince me it isn't horrible

    - by Jason L
    I made a tongue-in-cheek comment in another question thread calling PHP a terrible language and it got down-voted like crazy. Apparently there are lots of people here who love PHP. So I'm genuinely curious. What am I missing? What makes PHP a good language?

    Here are my reasons for disliking it:

    - PHP has inconsistent naming of built-in and library functions. Predictable naming patterns are important in any design.
    - PHP has inconsistent parameter ordering of built-in functions, e.g. array_map vs. array_filter, which is annoying in the simple cases and raises all sorts of unexpected behaviour or worse.
    - The PHP developers constantly deprecate built-in functions and lower-level functionality. A good example is when they deprecated pass-by-reference for functions. This created a nightmare for anyone doing, say, function callbacks.
    - A lack of consideration in redesign. The above deprecation eliminated the ability to, in many cases, provide default keyword values for functions. They fixed this in PHP 5, but they deprecated the pass-by-reference in PHP 4!
    - Poor execution of name spaces (formerly no name spaces at all). Now that name spaces exist, what do we use as the dereference character? Backslash! The character used universally for escaping, even in PHP!
    - Overly-broad implicit type conversion leads to bugs. I have no problem with implicit conversions of, say, float to integer or back again. But PHP (last I checked) will happily attempt to magically convert an array to an integer.
    - Poor recursion performance. Recursion is a fundamentally important tool for writing in any language; it can make complex algorithms far simpler. Poor support is inexcusable.
    - Functions are case insensitive. I have no idea what they were thinking on this one. A programming language is a way to specify behavior to both a computer and a reader of the code without ambiguity. Case insensitivity introduces much ambiguity.
    - PHP encourages (practically requires) a coupling of processing with presentation. Yes, you can write PHP that doesn't do so, but it's actually easier to write code in the incorrect (from a sound design perspective) manner.
    - PHP performance is abysmal without caching. Does anyone sell a commercial caching product for PHP? Oh, look, the designers of PHP do.

    Worst of all, PHP convinces people that designing web applications is easy. And it does indeed make much of the effort involved much easier. But the fact is, designing a web application that is both secure and efficient is a very difficult task. By convincing so many to take up programming, PHP has taught an entire subgroup of programmers bad habits and bad design. It's given them access to capabilities that they lack the understanding to use safely. This has led to PHP's reputation as being insecure. (However, I will readily admit that PHP is no more or less secure than any other web programming language.)

    What is it that I'm missing about PHP? I'm seeing an organically-grown, poorly-managed mess of a language that's spawning poor programmers. So convince me otherwise!
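
    To make the parameter-ordering point concrete, these are the standard signatures in question (plain PHP; anonymous functions as of PHP 5.3):

        <?php
        // array_map takes the callback FIRST, array_filter takes it LAST.
        $numbers = array(1, 2, 3, 4);
        $doubled = array_map(function ($x) { return $x * 2; }, $numbers);
        $evens   = array_filter($numbers, function ($x) { return $x % 2 == 0; });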

    Read the article

  • AsyncTask, inner classes and Buttons states

    - by Intern John Smith
    For a musical application (a sequencer), I have a few buttons:

        static ArrayList<Button> Buttonlist = new ArrayList<Button>();

        Buttonlist.add(0, (Button) findViewById(R.id.case_kick1));
        Buttonlist.add(1, (Button) findViewById(R.id.case_kick2));
        Buttonlist.add(2, (Button) findViewById(R.id.case_kick3));
        Buttonlist.add(3, (Button) findViewById(R.id.case_kick4));
        Buttonlist.add(4, (Button) findViewById(R.id.case_kick5));
        Buttonlist.add(5, (Button) findViewById(R.id.case_kick6));
        Buttonlist.add(6, (Button) findViewById(R.id.case_kick7));
        Buttonlist.add(7, (Button) findViewById(R.id.case_kick8));

    which I activate through OnClickListeners with, for example:

        Buttonlist.get(0).setActivated(true);

    I played the activated sounds via a loop and ifs:

        if (Buttonlist.get(k).isActivated()) {
            mSoundManager.playSound(1);
            Thread.sleep(250);
        }

    The app played fine, but I couldn't access the play/pause button while it was playing. I searched and found out about AsyncTasks. I have a nested class PlayPause:

        class PlayPause extends AsyncTask<ArrayList<Button>, Integer, Void>

    in which I have this:

        protected Void doInBackground(ArrayList<Button>... params) {
            for (int k = 0; k < 8; k++) {
                boolean isPlayed = false;
                if (Buttonlist.get(k).isActivated()) {
                    mSoundManager.playSound(1);
                    isPlayed = true;
                    try { Thread.sleep(250); }
                    catch (InterruptedException e) { e.printStackTrace(); }
                }
                if (!isPlayed) {
                    try { Thread.sleep(250); }
                    catch (InterruptedException e) { e.printStackTrace(); }
                }
            }
            return null;
        }

    I launch it via:

        Play.setOnClickListener(new PlayClickListener());

    with PlayClickListener:

        public class PlayClickListener implements OnClickListener {
            private Tutorial acti;

            public PlayClickListener() {
                super();
            }

            @SuppressWarnings("unchecked")
            public void onClick(View v) {
                if (Tutorial.play == 0) {
                    Tutorial.play = 1;
                    Tutorial.Play.setBackgroundResource(R.drawable.action_down);
                    acti = new Tutorial();
                    Tutorial.PlayPause myPlayPause = acti.new PlayPause();
                    myLecture.execute(Tutorial.Buttonlist);
                } else {
                    Tutorial.play = 0;
                    Tutorial.lecture.cancel(true);
                    Tutorial.Play.setBackgroundResource(R.drawable.action);
                }
            }
        }

    But it doesn't work. When I click on buttons and then touch Play/Pause, I get this:

        07-02 11:06:01.350: E/AndroidRuntime(7883): FATAL EXCEPTION: AsyncTask #1
        07-02 11:06:01.350: E/AndroidRuntime(7883): Caused by: java.lang.NullPointerException
        07-02 11:06:01.350: E/AndroidRuntime(7883): at com.Tutorial.Tutorial$PlayPause.doInBackground(Tutorial.java:603)
        07-02 11:06:01.350: E/AndroidRuntime(7883): at com.Tutorial.Tutorial$PlayPause.doInBackground(Tutorial.java:1)

    And I don't know how to get rid of this error. Obviously, the AsyncTask doesn't find the Buttonlist activated status, even though the list is static. I don't know how to access these buttons' states; isPressed doesn't work either. Thanks for reading and helping me!
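
    One thing that stands out in the listener above (an observation on the posted code, not a confirmed diagnosis of line 603): acti = new Tutorial() constructs an Activity by hand, so it never goes through onCreate and none of its views or state are initialized; the task should instead be created from the Activity instance that is actually on screen. A hedged sketch:

        // Hedged sketch: hand the live Activity in instead of new-ing one up.
        public class PlayClickListener implements OnClickListener {
            private final Tutorial activity;

            public PlayClickListener(Tutorial activity) {
                this.activity = activity;   // the Activity instance Android created
            }

            public void onClick(View v) {
                if (Tutorial.play == 0) {
                    Tutorial.play = 1;
                    Tutorial.Play.setBackgroundResource(R.drawable.action_down);
                    Tutorial.PlayPause task = activity.new PlayPause();
                    task.execute(Tutorial.Buttonlist);
                } else {
                    Tutorial.play = 0;
                    Tutorial.lecture.cancel(true);
                    Tutorial.Play.setBackgroundResource(R.drawable.action);
                }
            }
        }

    wired up from inside the Activity as Play.setOnClickListener(new PlayClickListener(this)). Note also that the posted listener creates myPlayPause but executes myLecture - presumably a leftover from a rename.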

    Read the article

  • a friendly elaboration on how iceweasel misses as recent request was discovered in post tut

    - by v3x3n
    I would like to specify what you asked about, in how Iceweasel misses, in hopes that you are interested in considering the answer to be valid, considering the package should run efficiently for a novice user, especially when getting a first impression of a new software, browser, or package they are given by default. First let me say that I am very grateful for all Debian has done and continues to do for its user base, and I am not complaining, only hoping that in mentioning such it will eventually aid in improving the disappointing experience Iceweasel gave me in multiple ways right from the start of using it. I am sure tremendous amounts of hard work went into making it work as well as it does, and tremendous effort continues to go into IW like the rest of the package and distros such as this...

    Having said that, I do aim to keep my examples as simple as possible, but also explained in enough detail so as to not leave out important info needed to get a full idea of my issue... (Thanks for understanding if I sound like a novice to Debian; I am, but I continue to learn and advance by trial and error each day, and of course with the wonderful help from all the great minds available here and there.)

    Well, my bone to pick with Iceweasel (unfortunately, as I disdain being a complainer when so much has been done for us, the end user) is mainly three immediately noticed problems that kept it from offering me a smooth test drive (completely normal browsing conduct of your average user, I should indicate) as soon as I started out. First, I headed directly to Google Images, searched for a few web-sized images, then saved a few (no more than 800 to 900 in dimensions, a typically normal procedure I would imagine). I experienced a horrendous recurring freeze and lag that more than once almost left me to kill the process on each attempt, and it continued to compound so that a few attempts later even a single image save locked up the entire browser. I ultimately gave up on this procedure, in all its simplicity, with a bit of aggravation, wondering if there was something I had done wrong on my end pre-attempt or during installation.

    Secondly, visiting video player sites, in this case YouTube, will not allow any video feed to play whatsoever, and again I experienced a slight freezing that eventually dissipated as long as I didn't attempt to press play or load another video... Very annoying, and possibly related to my first issue (which could rely on my own lack of knowledge in setup config, but that troubles me if so)...

    Lastly, as another user has stated already (which brought you to ask them what exactly seemed to be the problem in the first place), and this is the final straw that broke the weasel's back, leading me here to find a solution in an updated version of Firefox: Iceweasel (an apparent build of Firefox 17.x.x) leaves the user out of luck when installing their favorite or most useful add-ons or plugins that aid their tasks or normal usage online... As the other user clearly stated, there are almost zero options and barely any luck finding compatible plugins with such an outdated version still being used... I can only imagine, as a novice or standard user to Linux and beginner to Debian (which I am all for, btw!), that if I am discovering major usability issues which hinder normal user conduct right off the bat, there is probably a bundle or more of security vulnerabilities that would unravel to me if I continued to give Mr Hommey's package any further attention, or interest in continuing to use...

    This is a deal breaker on any account, unless all three of these issues are due to a very n00berish setting or lack thereof (and if that is the case, such was not properly stated for a beginner in any place available by following installation tutorials such as the LVM installers, which I felt lacked a clear explanation for someone who is trying to learn why and how, rather than a fix to a potentially broken out-of-box installation). I really think that is irrelevant in any case, however, as there is much step-by-step support regarding that kind of thing, just saying... it's very challenging alongside a job with deadlines to keep, and the need to learn a better and more customizable system that was not as popularized when I became a dev via Windows (OK, OK, I know that's a huge mistake, and now I am way off the original topic, but anyway).

    Thanks very much for taking the time to read over this, and I hope it honestly gives you a few good and valid reasons for users wanting and seeking a way to get around Iceweasel altogether... Finally, if I am missing something vital that renders all I've stated invalid or useless, please feel free to shove that elephant in the room my direction! Thanks for all you do, as I am sure it is very useful and dedicated, being you are a Debian maintainer and super user! Much love for intelligence, freedom and sharing the secrets of the web! May it remain unregulated and a good place to grow and learn for generations to come! Stay true! Take care now! -v3x (v3x @ gmx .com)

    Read the article

  • Referencing variables in a structure / C++

    - by user1628622
    Below, I have provided a minimal example of code I created. I managed to get this code working, but I'm not sure if the practice being employed is sound. In essence, what I am trying to do is have the 'Parameter' class reference select elements in the 'States' class, so variables in States can be changed via Parameters. Questions I have: is the approach taken OK? If not, is there a better way to achieve what I am aiming for?

    Example code:

        struct VAR_TYPE {
        public:
            bool is_fixed;        // If is_fixed = true, then variable is a parameter
            double value;         // Numerical value
            std::string name;     // Description of variable (to identify it by name)
        };

        struct NODE {
        public:
            VAR_TYPE X, Y, Z;     /* VAR_TYPE is a structure of primitive types */
        };

        class States {
        private:
            std::vector<NODE_ptr> node;                 // shared ptr to struct NODE
            std::vector<PROP_DICTIONARY_ptr> property;  // CAN NOT be part of Parameter
            std::vector<ELEMENT_ptr> element;           // CAN NOT be part of Parameter
        public:
            /* etc */
            void set_X_reference(Parameter &T, int i) { T.push_var(&node[i]->X); }
            void set_Y_reference(Parameter &T, int i) { T.push_var(&node[i]->Y); }
            void set_Z_reference(Parameter &T, int i) { T.push_var(&node[i]->Z); }
            bool get_node_bool_X(int i) { return node[i]->X.is_fixed; }
            // repeat for Y and Z
        };

        class Parameter {
        private:
            std::vector<VAR_TYPE*> var;
        public:
            /* etc */
        };

        int main() {
            States S;
            Parameter P;

            /* Here I initialize and set S, and do other stuff */

            // Now I assign components in States to Parameters
            for (int n = 0; n < S.size_of_nodes(); n++) {
                if (S.get_node_bool_X(n) == true) {
                    S.set_X_reference(P, n);
                }
                // repeat if statement for Y and Z
            }

            /* Now P points to selected data in S, and I can
             * modify the contents of S through P */
            return 0;
        }

    Update: The reason this issue cropped up is that I am working with Fortran legacy code. To sum up this Fortran code: it's a numerical simulation of a flight vehicle. This code has a fairly rigid procedural framework one must work within, which comes with a pre-defined list of allowable Fortran types. The Fortran glue code can create an instance of a C++ object (in actuality, a reference from the perspective of Fortran), but is not aware of what is contained in it (other means are used to extract C++ data into Fortran). The problem that I encountered is that when a C++ module is dynamically linked to the Fortran glue code, C++ objects have to be initialized each time the C++ code is called. This happens by virtue of how the Fortran template is defined. To avoid this cycle of re-initializing objects, I plan to use 'States' as a container class. The Fortran code allows a 'State' object, which has an arbitrary definition; but I plan to use it to harness all relevant information about the model. The idea is to use the Parameter class (which is exposed and updated by the Fortran code) to update variables in States.
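
    One hazard worth illustrating in the sketch above: Parameter holds raw VAR_TYPE* into NODE objects owned by States, so if States ever drops a node, those pointers dangle. A hedged alternative (illustrative types, not from the post) pairs the owning shared_ptr with a pointer-to-member:

        // Hedged sketch: share ownership of the NODE and record which field is referenced.
        #include <memory>
        #include <vector>

        struct VarRef {
            std::shared_ptr<NODE> node;    // keeps the NODE alive while Parameter uses it
            VAR_TYPE NODE::*member;        // which field of the node: &NODE::X, ::Y or ::Z

            VAR_TYPE& get() const { return (*node).*member; }
        };

        class Parameter {
            std::vector<VarRef> vars;
        public:
            void push_var(std::shared_ptr<NODE> n, VAR_TYPE NODE::*m) {
                vars.push_back(VarRef{n, m});
            }
        };

    With this, set_X_reference would pass node[i] (the shared_ptr itself) together with &NODE::X, and modifying S through P stays valid regardless of what States later does with its own vector.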

    Read the article

  • Opening Skype, Opera, OpenOffice logs me off

    - by anjanesh
    What's common among Skype, Opera, and OpenOffice in Ubuntu? Whenever I open these applications I get logged off and shown the login screen again. This started happening since the 10.10 upgrade. Forgot to mention: yes, it's x64. Each time I open these applications, the UI shows and then crashes. I started each app and logged the last few lines of /var/log/syslog after each crash. Looks like something to do with sound drivers?
    Opera:
    Jan 8 09:33:20 al-ubuntu pulseaudio[11532]: pid.c: Daemon already running.
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 8.
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers.
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: snd_pcm_dump():
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Soft volume PCM
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Control: PCM Playback Volume
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: min_dB: -51
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: max_dB: 0
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: resolution: 256
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Its setup is:
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stream : CAPTURE
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: access : MMAP_INTERLEAVED
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: format : S16_LE
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: subformat : STD
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: channels : 2
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: rate : 44100
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: exact rate : 44100 (44100/1)
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: msbits : 16
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: buffer_size : 88192
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_size : 44096
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_time : 999909
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: tstamp_mode : ENABLE
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_step : 1
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: avail_min : 87310
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_event : 0
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: start_threshold : -1
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stop_threshold : 6205960286516543488
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_threshold: 0
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_size : 0
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: boundary : 6205960286516543488
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Its setup is:
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stream : CAPTURE
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: access : MMAP_INTERLEAVED
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: format : S16_LE
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: subformat : STD
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: channels : 2
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: rate : 44100
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: exact rate : 44100 (44100/1)
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: msbits : 16
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: buffer_size : 88192
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_size : 44096
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_time : 999909
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: tstamp_mode : ENABLE
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_step : 1
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: avail_min : 87310
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_event : 0
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: start_threshold : -1
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stop_threshold : 6205960286516543488
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_threshold: 0
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_size : 0
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: boundary : 6205960286516543488
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: appl_ptr : 87320
    Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: hw_ptr : 87320
    Jan 8 09:33:22 al-ubuntu kernel: [ 4962.078306] opera[11036]: segfault at 261 ip 0000000000000261 sp 00007fffed7cd9a8 error 14 in opera[400000+122b000]
    anjanesh@al-ubuntu:~$
    Skype:
    Jan 8 09:40:21 al-ubuntu pulseaudio[12602]: pid.c: Daemon already running.
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 8.
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers.
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: snd_pcm_dump():
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Soft volume PCM
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Control: PCM Playback Volume
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: min_dB: -51
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: max_dB: 0
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: resolution: 256
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Its setup is:
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stream : CAPTURE
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: access : MMAP_INTERLEAVED
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: format : S16_LE
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: subformat : STD
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: channels : 2
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: rate : 44100
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: exact rate : 44100 (44100/1)
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: msbits : 16
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: buffer_size : 88192
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_size : 44096
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_time : 999909
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: tstamp_mode : ENABLE
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_step : 1
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: avail_min : 87310
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_event : 0
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: start_threshold : -1
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stop_threshold : 6205960286516543488
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_threshold: 0
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_size : 0
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: boundary : 6205960286516543488
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Its setup is:
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stream : CAPTURE
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: access : MMAP_INTERLEAVED
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: format : S16_LE
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: subformat : STD
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: channels : 2
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: rate : 44100
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: exact rate : 44100 (44100/1)
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: msbits : 16
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: buffer_size : 88192
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_size : 44096
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_time : 999909
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: tstamp_mode : ENABLE
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_step : 1
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: avail_min : 87310
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_event : 0
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: start_threshold : -1
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stop_threshold : 6205960286516543488
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_threshold: 0
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_size : 0
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: boundary : 6205960286516543488
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: appl_ptr : 87312
    Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: hw_ptr : 87312
    anjanesh@al-ubuntu:~$
    Open Office:
    Jan 8 09:43:46 al-ubuntu pulseaudio[13157]: pid.c: Daemon already running.
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 16.
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers.
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: snd_pcm_dump():
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Soft volume PCM
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Control: PCM Playback Volume
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: min_dB: -51
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: max_dB: 0
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: resolution: 256
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Its setup is:
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stream : CAPTURE
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: access : MMAP_INTERLEAVED
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: format : S16_LE
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: subformat : STD
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: channels : 2
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: rate : 44100
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: exact rate : 44100 (44100/1)
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: msbits : 16
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: buffer_size : 88192
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_size : 44096
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_time : 999909
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: tstamp_mode : ENABLE
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_step : 1
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: avail_min : 87310
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_event : 0
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: start_threshold : -1
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stop_threshold : 6205960286516543488
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_threshold: 0
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_size : 0
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: boundary : 6205960286516543488
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Its setup is:
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stream : CAPTURE
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: access : MMAP_INTERLEAVED
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: format : S16_LE
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: subformat : STD
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: channels : 2
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: rate : 44100
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: exact rate : 44100 (44100/1)
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: msbits : 16
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: buffer_size : 88192
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_size : 44096
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_time : 999909
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: tstamp_mode : ENABLE
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_step : 1
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: avail_min : 87310
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_event : 0
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: start_threshold : -1
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stop_threshold : 6205960286516543488
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_threshold: 0
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_size : 0
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: boundary : 6205960286516543488
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: appl_ptr : 87320
    Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: hw_ptr : 87320
    anjanesh@al-ubuntu:~$

    Read the article

  • Turn Photos and Home Videos into Movies with Windows Live Movie Maker

    - by DigitalGeekery
    Are you looking for an easy way to take your digital photos and videos and turn them into a movie or slideshow? Today we'll take a detailed look at how to use Windows Live Movie Maker.
    Installation
    Windows Live Movie Maker comes bundled as part of the Windows Live Essentials suite (link below). However, you don't have to install any of the programs you may not want. Take notice of the You're almost done screen. Before clicking Continue, be sure to uncheck the boxes to set your search provider and homepage.
    Adding Pictures and Videos
    Open Windows Live Movie Maker. You can add videos or photos by simply dragging and dropping them onto the storyboard area. You can also click on the storyboard area or on the Add videos and photos button on the Home tab to browse for videos and photos. Windows Live Movie Maker supports most video, image, and audio file types. Select your files and click Open to add them to Windows Live Movie Maker. By default WLMM doesn't allow you to add files from network locations…so check out our article on how to add network support to Windows Live Movie Maker if the files you want to add are on a network drive.
    Layout
    All of your added clips will appear in the storyboard area on the right, while the currently selected clip will appear in the preview window on the left. You can adjust the size of the two areas by clicking and dragging the dividing line in the middle. Make the clips on the storyboard bigger or smaller by clicking on the thumbnail size icon. The slider at the lower right adjusts the zoom time scale.
    Previewing your Movie
    At any time, you can play back your movie and preview how it will look in the Preview window by pressing the space bar, or by pushing the play button under the preview window. You can also manually move the preview bar slider across the storyboard to view the clips as the video progresses.
    Adjusting Clips on the Storyboard
    You can click and drag clips on the storyboard to change the order in which the photos and videos appear.
    Adding Music
    Nothing brings a movie to life quite like music. Selecting Add music will add your music to the beginning of the movie. Select Add music at the current point to insert it at the current location of your preview bar slider, then browse for your music clip. WLMM supports many common audio files such as WAV, MP3, M4A, WMA, AIFF, and ASF. The music clip will appear above the video/photo clips on the storyboard. You can change the location of music clips by clicking and dragging them to a different location on the storyboard.
    Add Titles, Captions, and Credits
    To add a Title screen to your movie, click the Title button on the Home tab. Type your title directly into the text box on the preview screen. The title will be placed at the location of the preview slider on the storyboard. However, you can change the location by clicking and dragging the title to other areas of the storyboard. On the Format tab, there are a handful of text settings. You can change the font, color, size, alignment, and transparency. The Adjust group allows you to change the background color, edit the text, and set the length of time the Title will appear in the movie. The Effects group on the Format tab allows you to select an effect for your title screen. By hovering your cursor over each option, you will get a live preview of how each effect will appear in the preview window. Click to apply any of the effects.
    For captions, select where you want your caption to appear with the preview slider on the storyboard, then click the Captions button on the Home tab. Just like the title, you type your caption directly into the text box on the preview screen, and you can make any adjustments by using the Font and Paragraph, Adjust, and Effects groups above. Credits are done the same way as titles and captions, except they are automatically placed at the end of the movie.
    Transitions
    Go to the Animations tab on the ribbon to apply transitions. Select a clip from the storyboard and hover over one of the transitions to see it in the preview window. Click on the transition to apply it to the clip. You can apply transitions to clips separately, or hold down the Ctrl key while clicking to select multiple clips to which to apply the same transition. Pan and zoom effects are also located on the Animations tab, but can be applied to photos only. Like transitions, you can apply them individually to a clip or hold down the Ctrl key while clicking to select multiple clips to which to apply the same pan and zoom effect. Once applied, you can adjust the duration of the transitions and pan and zoom effects. You can also click the dropdown for additional transitions or effects.
    Visual Effects
    Similar to pan and zoom and transitions, you can apply a variety of Visual Effects to individual or multiple clips.
    Editing Video and Music
    Note: This does not actually edit the original video you imported into your Windows Live Movie Maker project, only how it appears in your WLMM project. There are some very basic editing tools located on the Home tab. The Rotate left and Rotate right buttons will adjust any clip that may be oriented incorrectly. The Fit to music button will automatically adjust the duration of the photos (if you have any in your project) to fit the length of the music in your movie. Audio mix allows you to change the volume level. You can also do some slightly more advanced editing from the Edit tab. Select the video clip on the storyboard and click the Trim tool to edit or remove portions of a video clip. Next, click and drag the sliders in the preview window to select the area you wish to keep. For example, the area outside the sliders is the area trimmed from the movie; the area inside is the section that is kept in the movie. You can also adjust the Start and End points manually on the ribbon. When you are finished, click Save trim. You can also split your video clips. Move the preview slider to the location in the video clip where you'd like to split it, and select Split. Your video will be split into separate sections. Now you can apply different effects to them or move them to different locations on the storyboard.
    Editing Music Clips
    Select the music clip on the storyboard and then the Options tab on the ribbon. You can adjust the music volume by moving the slider right and left. You can also choose to have your music clip fade in or out at the beginning and end of your movie. From the Fade in and Fade out dropdowns, select None, Slow, Medium, or Fast. To adjust the sound of your video clips, click on the Edit tab, select the Video volume button, and adjust the slider. Move it all the way to the left to mute any background noise in your video clips.
    AutoMovie
    As you have seen, Windows Live Movie Maker allows you to add effects, transitions, titles, and more. If you don't want to do any of that stuff yourself, AutoMovie will automatically add a title, credits, cross-fade transitions between items, and pan and zoom effects to photos, and fit your project to the music. Just select the AutoMovie button on the Home tab. You can go from zero to movie in literally a couple of minutes.
    Uploading to YouTube
    You can share your video on YouTube directly from Windows Live Movie Maker. Click on the YouTube icon in the Sharing group on the Home tab. You'll be prompted for your YouTube username and password. Fill in the details about your movie and click Publish. The movie will be converted to WMV before being uploaded to YouTube. As soon as the YouTube conversion is complete, your new movie is live and ready to be viewed.
    Saving your Movie as a Video File
    Select the icon at the top left, then select Save movie. As you hover your mouse over each of the options, you will see the output display size, aspect ratio, and estimated file size per minute of video. All of these settings will output your movie as a WMV file. (Unfortunately, the only option is to save a movie as a WMV file.) The only difference is how they are encoded, based on preset common settings. The Burn to DVD option also outputs a WMV file, but then opens Windows DVD Maker and walks you through the process of creating and burning a DVD. If you choose the Burn to DVD option, close this window when the WMV file conversion is complete and Windows DVD Maker will prompt you to begin. When your movie is finished, it's time to relax and enjoy.
    Conclusion
    Windows Live Movie Maker makes it easy for the average person to quickly churn out nice-looking movies and slideshows from their own pictures and videos. However, long-time users of previous editions (formerly called Windows Movie Maker) will likely be disappointed by some features missing in Windows Live Movie Maker that existed in earlier editions. Looking for details on burning your new project to DVD? Check out our article on how to create and author DVDs with Windows DVD Maker. Download Windows Live Movie Maker

    Read the article

  • How about a new platform for your next API&hellip; a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspx
    Say what? I'm seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (and potentially drive A/B or MVT testing). That's a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front end UI where you can edit the current config that the API returns and publish your changes. Sound like a Content Management System would be a good fit? I've been looking at that and it's a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using a CMS which is very useful but probably not something you'd build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security, and an audit trail). You typically use a CMS for HTML resources, but it's simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with Accept=application/json and get the same content rendered as JSON for the app to use.
    Enter Umbraco
    Umbraco is an open source .NET CMS that's been around for a while. It has very good adoption, a lively community, and a good release cycle. It's easy to use, has all the functionality you need for a CMS-driven API, and it's scalable (although you won't necessarily put much scale on the CMS layer). In the rest of this post, I'll build out a simple app config API using Umbraco. We'll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we'll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we'll look at how you could build this into a wider solution. If you want to try this for yourself, it's ultra easy – there's an Umbraco image in the Azure Website gallery, so all you need to do is create a new Website, select Umbraco from the image, and complete the installation. It will create a SQL Azure database to store all the content, as well as a Website instance for editing and accessing content. They're standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete).
    1. Create Configuration Document Type
    In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We'll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type. This will base your new type off the Master type, which gives you some existing properties that we'll use – like the Page Title, which will be the resource URL.
    In the Generic Properties tab for the new Document Type, you set the properties you'll be able to edit and return for the resource. Here I've added a text string where I'll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It's likely to change during the life of the product, but not very often, so it's good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release.
    2. Define the response template
    Now that we've defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it's easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here's a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering:

        @inherits Umbraco.Web.Mvc.UmbracoTemplatePage
        @using Newtonsoft.Json
        @{
            Layout = null;
        }
        @if (UmbracoContext.HttpContext.Request.Headers["accept"] != null
             && UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json")
        {
            Response.ContentType = "application/json";
            @Html.Raw(JsonConvert.SerializeObject(new
            {
                cacheLifespan = CurrentPage.cacheLifespan,
                bannerImageUrl = CurrentPage.bannerImage,
                nextReleaseDate = CurrentPage.nextReleaseDate
            }))
        }
        else
        {
            <h1>App configuration</h1>
            <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p>
            <p>Banner Image: </p>
            <img src="@CurrentPage.bannerImage">
            <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p>
        }

    That's a rough-and-ready example of what you can do. You could make it completely generic and just render all the document's properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship.
    3. Create the content
    With your document type created, in the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type. Here I've set the cache lifespan to an xs:duration value, uploaded an image for the banner, and specified a release date. Each property gets the appropriate input control – text box, file upload, and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it'll be available to access.
    4. Access the resource
    Now if you open that URL in the browser, you'll see the HTML version rendered, complete with the image and formatted date.
    Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get the exact same resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort.
    5. The wider landscape
    If you have fairly stable content to expose as an API, I think this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn't need it to. When you have additional requirements, like logging API access requests - but doing it out-of-band so clients aren't impacted - you can put a very thin API layer on top of Umbraco and cache the CMS responses in your API layer. Here the API does a passthrough to the CMS, so the CMS still controls the content, but the API caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support 1000s of requests per second, you're scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). This diagram also shows an approach to logging, by asynchronously publishing a message to a queue (Redis in this case), which can be picked up later and persisted by a different process.
    Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework which I'll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources.
    Is it maintainable? We have three moving parts, which are all managed resources in Azure – an Azure Website for Umbraco which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API, which is very simple.
    Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day's effort. Worth considering if you're publishing long-lived resources through your API.
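    To make the passthrough idea concrete, here is a minimal sketch of what that thin caching API layer could look like as an ASP.NET Web API controller. This is not from the original post: the controller shape, the CMS base URL, and the one-minute cache window are illustrative assumptions, and the out-of-band Redis logging is omitted.

        using System;
        using System.Net;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Runtime.Caching;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class ConfigController : ApiController
        {
            // hypothetical base URL of the Umbraco instance serving the config documents
            private const string CmsBaseUrl = "http://static.example.com/config/";
            private static readonly HttpClient Cms = new HttpClient();
            private static readonly MemoryCache Cache = MemoryCache.Default;

            // GET api/config/{name} - passthrough to the CMS with a short-lived cache,
            // so Umbraco sees roughly one request per resource per cache window
            [HttpGet]
            public async Task<HttpResponseMessage> Get(string name)
            {
                var body = Cache.Get(name) as string;
                if (body == null)
                {
                    // ask the CMS for the JSON rendering of the resource,
                    // using the same content negotiation the Razor template supports
                    var request = new HttpRequestMessage(HttpMethod.Get, CmsBaseUrl + name);
                    request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                    var cmsResponse = await Cms.SendAsync(request);
                    cmsResponse.EnsureSuccessStatusCode();
                    body = await cmsResponse.Content.ReadAsStringAsync();

                    // cache for one minute: the CMS still owns the content,
                    // the API layer just absorbs the request volume
                    Cache.Set(name, body, DateTimeOffset.UtcNow.AddMinutes(1));
                }

                return new HttpResponseMessage(HttpStatusCode.OK)
                {
                    Content = new StringContent(body, System.Text.Encoding.UTF8, "application/json")
                };
            }
        }

    The design point is simply that the cache key is the resource name, so however many clients hit the API layer, Umbraco only sees roughly one request per resource per cache window.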

    Read the article

  • CodePlex Daily Summary for Tuesday, November 15, 2011

    CodePlex Daily Summary for Tuesday, November 15, 2011
    Popular Releases
    THE NVL Maker: The NVL Maker Ver 3.10: (release notes originally in Chinese; the characters were lost in transcoding, leaving only fragments: TJS/EXP expression support, @if~@elsif~@else~@endif branching, Wizard.exe from http://code.google.com/p/nvlmaker-wizard/, KAGConfigEx2.exe from http://kcddp.keyfc.net/bbs/viewthread.php?tid=1374&extra=page%3D1, skin support, and changes to macro_map.ks, AnimPlayer.ks, and macro.ks)
    CreateHandouts: Latest Version: Latest Version
    SQL Monitor - tracking sql server activities: SQLMon 4.1 alpha2: 1. Improved object search: escape special characters, support search histories, and remember search options. 2. Allow the user to set the connection time-out. 3. Allow the user to drag & drop SQL text or files to editors.
    SCCM Client Actions Tool: SCCM Client Actions Tool v0.8: SCCM Client Actions Tool v0.8 is currently the latest version. It comes with the following changes since the last version: Added "Wake On LAN" action. WOL.EXE is now included. Added new action "Get all active advertisements" to list all machine-based advertisements on remote computers. Added new action "Get all active user advertisements" to list all user-based advertisements for logged-on users on remote computers. Added config.ini setting "enablePingTest" to control whether ping test is ru...
    Windows Azure SDK for PHP: Windows Azure SDK for PHP v4.0.4: INSTALLATION: Windows Azure SDK for PHP requires no special installation steps. Simply download the SDK, extract it to the folder you would like to keep it in, and add the library directory to your PHP include_path. INSTALLATION VIA PEAR: Maarten Balliauw provides an unofficial PEAR channel via http://www.pearplex.net. Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPAzure Or if you've already installed PHPAzure before: pear upgrade p...
    QuickGraph, Graph Data Structures And Algorithms for .Net: 3.6.61116.0: Portable library build that allows QuickGraph to be used in any .NET environment: .NET 4.0, Silverlight 4.0, WP7, Win8 Metro apps.
    Devpad: 4.7: What's new for Devpad 4.7: New export to Rich Text. New export to FlowDocument. Minor bug fixes, improvements, and speed-ups.
    Weapsy: 0.4.1 Alpha: Edit Text bug fixed
    Desktop Google Reader: 1.4.2: This release removes the like and the broadcast buttons as Google Reader stopped supporting them (no, we don't like this decision...). Additionally, and to have at least a small plus: the login window now automatically logs you in if you stored username and password (no more extra click needed). Finally, added WebKit .NET to the about window and removed Awesomium. MD5-Hash: 5fccf25a2fb4fecc1dc77ebabc8d3897 SHA-Hash: d44ff788b123bd33596ad1a75f3b9fa74a862fdb
    Fluent Validation for .NET: 3.2: Changes since 3.1: Fixed issue #7084 (NotEmptyValidator does not work with EntityCollection<T>). Fixed issue #7087 (AbstractValidator.Custom ignores RuleSets and always runs). Removed support for WP7 for now, as it doesn't support co/contravariance without crashing.
    RDRemote: Remote Desktop remote configurator V 1.0.0: Remote Desktop remote configurator V 1.0.0
    Rawr: Rawr 4.2.7: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon: We now have a Rawr Official Addon for in-game exporting and importing of character data, hosted on Curse. The Addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including bag and bank items) like Char...
    VidCoder: 1.2.2: Updated Handbrake core to svn 4344. Fixed the 6-channel discrete mixdown option not appearing for AAC encoders. Added handling for possible exceptions when copying to the clipboard, with retries and a message when it fails. Fixed issue with the audio bitrate UI not appearing sometimes when switching audio encoders. Added extra checks to protect against reported crashes. Added code to upgrade encoding profiles on old queued items.
    Media Companion: MC 3.422b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Made the TV Shows folder list sorted. Re-visibled 'Manually Add Path' in Root Folders. Sorted the list to process during new TV episode search. Rebuild Movies now processes through folders alphabetically. Fix for issue #208 - Display Missing Episodes is not popu...
    DotSpatial: DotSpatial Release Candidate 1 (1.0.823): Supports loading extensions using System.ComponentModel.Composition. DemoMap compiled as x86 so that GDAL runs on x64 machines. How to: Use an Assembly from the Web: be aware that your browser may add an identifier to downloaded files which results in "blocked" dll files. You can follow this link to learn how to "Unblock" files: right-click on the zip file before unzipping, choose Properties, go to the General tab, and click the Unblock button. http://msdn.microsoft.com/en-us/library...
    XPath Visualizer: XPathVisualizer v1.3 Latest: This is v1.3.0.6 of XpathVisualizer, an update release for v1.3. These workitems have been fixed since v1.3.0.5: 7429 7432 7427
    MSBuild Extension Pack: November 2011: Release Blog Post. The MSBuild Extension Pack November 2011 release provides a collection of over 415 MSBuild tasks. A high-level summary of what the tasks currently cover includes the following: System items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound. Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...
    Extensions for Reactive Extensions (Rxx): Rxx 1.2: What's New / Related Work Items: please read the latest release notes for details about what's new. Content summary: Rxx provides the following features (see the Documentation for details): many IObservable<T> extension methods and IEnumerable<T> extension methods; many useful types such as ViewModel, CommandSubject, ListSubject, DictionarySubject, ObservableDynamicObject, Either<TLeft, TRight>, Maybe<T> and others; various interactive labs that illustrate the runtime behavior of the extensio...
    Facebook C# SDK: v5.3.2: This is an RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods use the Graph API instead of the legacy REST API. Removed dependency on Code Contracts. Enabled Task Parallel support in .NET 4.0+ (experimental). Added support for an early preview of .NET 4.5 (binaries not distributed on CodePlex nor nuget.org; you will need to build manually from Facebook-Net45.sln). Added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress. Added ne...
    Delete Inactive TS Ports: List and delete the Inactive TS Ports: UPDATE: Added support for Windows 2003 servers and removed some null reference errors when the registry key was not present. List and delete the Inactive TS Ports - the InactiveTSPortList.EXE accepts command line arguments; the InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.
    New Projects
    AFNC: test
    Arithmetics: arithmetics for silverlight use note pattern by time stream
    Azon.Library: A collection of extensions, static helpers, and AOP attributes. More will be added as the project goes on.
    Chat TextBlock Control: A Windows Phone 7.1 control resembling the chat balloon textblocks in the SMS app.
    Diamond Framework: Diamond Framework, a common framework for Diamond Group.
    DNN Social Helpers: DNN Social Helpers
    Dragon: Dragon
    Easy Video Cropper: A simple application to make cropping videos easy for anyone. Automatically detects black lines. Uses FFMPEG.
    Fluent Resource Mapper: This project aims to develop a framework to assist the internationalization of software using the Convention over Configuration paradigm.
    Fully Observable: This project is to create an improved set of observable collections. It provides notifications for when items inside the collection change as well as when the collection itself changes.
    grpcmnq: no summary at all
    MathTool: Math tool for Silverlight; planned to cover three areas: matrices, differential equations, and equations of locus.
    nopCommerce Buckaroo payment provider plugin: This is a payment provider plugin for the Dutch payment provider Buckaroo. This plugin is developed and tested for nopCommerce version 2+.
    Phoenix MVVM+C Framework: Phoenix MVVM+C Framework
    PowerLib: PowerLib extends the .NET system library.
    RDRemote: This utility allows enabling Remote Desktop connections from a remote computer using WMI.
    Sencha Touch Mini Workflow Framework: A workflow framework for Sencha Touch mobile apps, including automatic component management.
    ShWP: helper library for Windows Phone
    Timer, Cronômetro e Despertador: Project developed in the C# extension course at UFSCar Sorocaba.
    UtilityLibrary.Ajax: Ajax
    UtilityLibrary.Email: email
    UtilityLibrary.FormBase: UtilityLibrary.FormBase
    UtilityLibrary.Http: UtilityLibrary for HttpWebRequest
    UtilityLibrary.Ormapping: ormapping
    VoiceModel: VoiceModel is a project which makes it easier to develop VoiceXML applications using ASP.NET MVC with Razor. It uses the MVVM (Model-View-VoiceModel) design pattern to abstract the voice application to a higher level. It is developed in C# and Razor.
    WebSite.Request: WebSite.Request launches a web request (via XMLHTTP) against a website. Use it, for example, to make an initial request to a SharePoint URL and avoid the "slow first request" problem.
    Where's my lei, man?: Where's my lei, man?
    Zombsquare: Sample application for Windows Phone used in the Windows Phone Roadshow held in Spain in 2011. In this solution you can find examples of: Blend design, Bing Maps, geolocation, augmented reality, converters, a mini-trivia game, object serialization ... surviving a zombie apocalypse...

    Read the article

  • Introducing… SharePress!

    - by Bil Simser
    For those that follow me, I've been away from blogging and twittering for a couple of months. This is the reason. For the last few months I've been working with a cross-functional team putting together a new product from the people that run WordPress, the free premier blogging platform. The result is a new product we call SharePress, a highly extensible blogging and content management platform with the usability of WordPress and the power of SharePoint combined into a single product. SharePress gives you SharePoint sites that are SEO-friendly, delivered with a Web 2.0 ease of use, leveraging all of the existing abilities of SharePoint and WordPress that we know today.
    The Reason
    Back in December I was approached by the WordPress team about building a new platform that took advantage of the power of SharePoint but the ease of WordPress. I'm no stranger to WordPress and its 5-minute, no-holds-barred install (I've always wanted SharePoint to do this!), and I run my personal blog on WordPress, as does my better half, Princess Jenn. Over the past year or so there's always been a pitch by so-called Web 2.0 applications to deliver the power of SharePoint but the ease of [insert product here]. I checked each and every one of them out, but they fell woefully short when it came to SharePoint's document management, versioning, and customization. They try, but it's never been up to par in my books. On the flipside, SharePoint has always been tops in collaboration in the Enterprise, but it's painful to develop web parts, UI customization can be tricky, and there's just no user community for something as simple as themes and designs.
    The Product
    Enter SharePress. Is it SharePoint? Is it WordPress? It's both, and neither. Everything you like about both products is there, but this is a bold new product that is positioned to bring SharePoint to the masses while maintaining the fidelity of an Enterprise 2.0 collaboration platform. SharePress delivers on all fronts, including:
    The ability to leverage any WordPress/Joomla/Drupal/DotNetNuke themes and skins inside of SharePoint
    Any WordPress/Drupal/Joomla/DotNetNuke/SharePoint plug-in/module/web part/feature works out of the box with SharePress
    SEO-friendly URLs and pages
    Permalinks for all content
    All the features of SharePoint Server 2010 (including InfoPath, Excel, and Access services) included in the price
    Small deployment footprint. You decide how much to deploy and where.
    Independent Database Abstraction Layer (iDal) that allows you to deploy to SQL Server 2005/2008, MySQL, and PostgreSQL
    Portable Rendering Engine Layer (PREL) so you host .NET or PHP on Apache or IIS (version 7 or higher)
    The install feature is built around WordPress and its famous 5-minute install (actually, it's never taken me more than 1 minute). SharePress installs with two screens after the files are uploaded to your server (which can be done entirely using FTP). After you enter two fields of information and click "Install SharePress", you'll be done. No mess, no fuss, no complicated dependencies, and no server access required! How much simpler could this be?
    The Technology
    WordPress plug-ins and themes working with SharePoint? Of course! The answer is IronPython, which has now reached a maturity level capable of doing on-the-fly code language conversions. SharePress is a brand new product not built on top of any previous platform, but it leverages all the power of each of those applications through a patent-pending technique called SharePress Multi-plAtfoRm Technology (SMART). SMART will convert PHP code on the fly into Python (using SWIG as an intermediate processor), which is then compiled to MSIL and then delivered back as an ASP.NET MVC application (output is C# or VB.NET, but you can build your own SMART converter to output a different language). Sound complicated? It is, but it's all behind the scenes and you don't have to worry about a thing. This image illustrates the technology stack and process: users can load up out-of-the-box PHP themes and plug-ins from the WordPress/Joomla/Drupal community into the SMART converter, which outputs MSIL that is used by the SharePress engine and rendered on the fly to the end user. Supported PHP versions are 4.xx and 5.xx, with version 6 support to come when it's released. Similarly, you can take any .NET application, DotNetNuke module, SharePoint Web Part, or event handler and feed it into the converter to output the same. Everything is reverse-compiled into MSIL, so it becomes technology agnostic. No source code access is needed, and the SMART converter can handle obfuscated .NET assemblies that were built with .NET 1.0, 1.1, 2.0, 3.5, and 4.0. With this technology you can also, with the flip of a switch, have the output create PHP pages for you. This allows you to run SharePress on Unix-based systems running PHP and MySQL, allowing you to deliver your SharePoint-like experience to your users with a $0 infrastructure footprint. Here's SharePress with the default WordPress post imported, followed by a stock SharePoint collaboration site that was imported; the site was then given the default Kubrick theme from WordPress.
    The Features
    Deploy any of the freely available 100,000 WordPress/Joomla/Drupal themes instantly to your runtime SharePress environment and preview or activate them right from your browser.
    Built-in Web 2.0 jQuery-enabled end-user and administrator web interface. Never have to remote into a server again!
    Run any SharePoint Web Part or event handler directly, without modification or access to source code, in SharePress.
    Use any WordPress/Joomla/Drupal plug-in directly in SharePress, with no local admin or server access needed. Just upload and activate.
    Upload and activate any SharePoint Solution Package to any site remotely. No rebuilding. Changes made to sites require no compiling or rebuilding and are published immediately.
    Password-protected content. You can give passwords to individual posts, articles, pages, documents, forms, and list items. A powerful polymorphic Captcha system backs the security interface, and vendors can easily tie into smart card readers, fingerprint readers, and retina scanners for authorization and identification. OpenID, Windows Live, and Windows Authentication are supported out of the box.
    Infinitely customizable and extensible. You can leverage plug-ins from the open source community to do practically anything, all configured and uploaded via the browser. Additionally, the developer API (available soon) allows you to build extensions in .NET, PHP, and Python with little effort.
    Easy importing. We have importers for Blogger, WordPress, Drupal, Joomla, DotNetNuke, and SharePoint, so you can populate your site quickly and easily with full metadata modeling and creation.
    Banner management. It's easy to set up banners for your web site, complete with impression numbers, special URLs, and more.
    Menu Manager. The Menu Manager allows you to create as many menus as you want; each one can be associated with specific audiences or roles and then styled across multiple contexts, including the same menu delivered as a fly-out, rollover, drop-down, or just about any navigation you can think of.
    Collaborative ShareBook. Our exclusive book feature allows you to set up a "book" and then authorize individuals to contribute content.
    Permalinks. All content in SharePress has a permanent or "perma link" associated with it, so people can link to it freely without fear of broken links.
    Apache or IIS, Unix / Linux / BSD / Solaris / Windows / Mac OS X support. Deliver SharePress the way *you* want from the platform *you* decide.
    Database independence. We know people wanted to run on any database platform, so SharePress is built on top of a database abstraction layer that allows you to run on SQL Server, MySQL, or PostgreSQL. Other databases can be supported by writing a supporting database script consisting of fourteen function calls. The script can be written in Perl, Python, AWK, PowerShell, Unix shell scripts, VBA, or simple DOS batch files.
    The Team
    SharePress is the work of a lot of people in both the WordPress and SharePoint communities. I worked with a lot of SharePoint MVPs to create this new product, as we really wanted to deliver the most compatible and feature-rich system in a product that we would be proud of. Many thanks go out to Eli Bleeker, Todd Robillard, Scot Larson, Daniel Hillier, Shane Fox, Box Peran, Amanda English, and Bill Murray for doing the heavy lifting and for all of their expertise and innovative thinking to get this product out.
    Licensing and Pricing
    SharePress is still in the final stages of pricing, but we're looking at a price point somewhere between $99-$100 to make it affordable for everyone. We plan to announce final pricing sometime in the next few weeks. There are no additional charges for Enterprise versions or additional features. Everything you see is what's available, and it's just a matter of lighting up your site with whatever feature you want to enable. The product will not be open source, but source code licenses will be available to ISVs who are interested in interfacing with the API at a low level. Cost will be $25,000 USD per developer and gives you complete access to the source code of the SharePress Foundation System and the .NET 4.0 Framework source code.
    Conclusion
    We hope you enjoy the launch of SharePress as the new premium blogging and content management platform for both intranets and the Internet. We think we've built the best-of-breed solution here and made it easy for anyone to get started with a minimum of infrastructure, while allowing the scalability of SharePress to shine through in the Enterprise 2.0 world. We encourage your feedback, so please leave comments as to what you're looking for in this system, as we're always evolving it to make it a better product for everyone.

    Read the article

  • Revisiting ANTS Performance Profiler 7.4

    - by James Michael Hare
    Last year, I did a small review on the ANTS Performance Profiler 6.3, now that it’s a year later and a major version number higher, I thought I’d revisit the review and revise my last post. This post will take the same examples as the original post and update them to show what’s new in version 7.4 of the profiler. Background A performance profiler’s main job is to keep track of how much time is typically spent in each unit of code. This helps when we have a program that is not running at the performance we expect, and we want to know where the program is experiencing issues. There are many profilers out there of varying capabilities. Red Gate’s typically seem to be the very easy to “jump in” and get started with very little training required. So let’s dig into the Performance Profiler. I’ve constructed a very crude program with some obvious inefficiencies. It’s a simple program that generates random order numbers (or really could be any unique identifier), adds it to a list, sorts the list, then finds the max and min number in the list. Ignore the fact it’s very contrived and obviously inefficient, we just want to use it as an example to show off the tool: 1: // our test program 2: public static class Program 3: { 4: // the number of iterations to perform 5: private static int _iterations = 1000000; 6: 7: // The main method that controls it all 8: public static void Main() 9: { 10: var list = new List<string>(); 11: 12: for (int i = 0; i < _iterations; i++) 13: { 14: var x = GetNextId(); 15: 16: AddToList(list, x); 17: 18: var highLow = GetHighLow(list); 19: 20: if ((i % 1000) == 0) 21: { 22: Console.WriteLine("{0} - High: {1}, Low: {2}", i, highLow.Item1, highLow.Item2); 23: Console.Out.Flush(); 24: } 25: } 26: } 27: 28: // gets the next order id to process (random for us) 29: public static string GetNextId() 30: { 31: var random = new Random(); 32: var num = random.Next(1000000, 9999999); 33: return num.ToString(); 34: } 35: 36: // add it to our list - very inefficiently! 37: public static void AddToList(List<string> list, string item) 38: { 39: list.Add(item); 40: list.Sort(); 41: } 42: 43: // get high and low of order id range - very inefficiently! 44: public static Tuple<int,int> GetHighLow(List<string> list) 45: { 46: return Tuple.Create(list.Max(s => Convert.ToInt32(s)), list.Min(s => Convert.ToInt32(s))); 47: } 48: } So let’s run it through the profiler and see what happens! Visual Studio Integration First, let’s look at how the ANTS profilers integrate with Visual Studio’s menu system. Once you install the ANTS profilers, you will get an ANTS menu item with several options: Notice that you can either Profile Performance or Launch ANTS Performance Profiler. These sound similar but achieve two slightly different actions: Profile Performance: this immediately launches the profiler with all defaults selected to profile the active project in Visual Studio. Launch ANTS Performance Profiler: this launches the profiler much the same way as starting it from the Start Menu. The profiler will pre-populate the application and path information, but allow you to change the settings before beginning the profile run. So really, the main difference is that Profile Performance immediately begins profiling with the default selections, where Launch ANTS Performance Profiler allows you to change the defaults and attach to an already-running application. Let’s Fire it Up! 
So when you fire up ANTS, either via the Start Menu or the Launch ANTS Performance Profiler menu in Visual Studio, you are presented with a very simple dialog to get you started. Notice you can choose from many different options for application type. You can profile executables, services, web applications, or just attach to a running process. In fact, in version 7.4 we see two new options added:
ASP.NET Web Application (IIS Express)
SharePoint web application (IIS)
So this gives us an additional way to profile ASP.NET applications and the ability to profile SharePoint applications as well. You can also choose your level of detail in the Profiling Mode drop-down. If you choose Line-level and method-level timings detail, you will get a lot more detail on the method durations, but this will also slow down profiling somewhat. If you really need the profiler to be as unintrusive as possible, you can change it to Sample method-level timings. This performs very light profiling, where basically the profiler collects timings of a method by examining the call stack at given intervals. Which mode you choose depends a lot on how much detail you need to find the issue and how sensitive your program’s issues are to timing. For our example, let’s just go with the line and method timing detail. So, we check that all the options are correct (if you launch from VS2010, the executable and path are filled in already), and fire it up by clicking the [Start Profiling] button.
Profiling the Application
Once you start profiling the application, you will see a real-time graph of CPU usage that indicates how much your application is using the CPU(s) on your system. During this time, you can select segments of the graph and bookmark them, giving them mnemonic names. This can be useful if you want to compare performance in one part of the run to another part of the run. Notice that once you select a block, it will give you the call tree breakdown for that selection only, and the relative performance of those calls. Once you feel you have collected enough information, you can click [Stop Profiling] to stop the application run and information collection and begin a more thorough analysis.
Analyzing Method Timings
So now that we’ve halted the run, we can look around the GUI and see what we can see. By default, the times are shown in terms of percentage of time of the total run of the application, though you can change this in the View menu to milliseconds, ticks, or seconds as well. This won’t affect the percentages of methods; it only affects the units in which the times are shown. Notice also that the major hotspot seems to be in a method without source; ANTS Profiler filters these out by default, but you can right-click on the line and remove the filter to see more detail. This proves especially handy when a bottleneck is due to a method in the BCL. So now that we’ve removed the filter, we see a bit more detail. In addition, ANTS Performance Profiler gives you the ability to decompile the methods without source so that you can dive even deeper, though typically this isn’t necessary for our purposes. When looking at timings, there are generally two types of timings for each method call:
Time: the time spent ONLY in this method, not including calls this method makes to other methods.
Time With Children: the total of time spent in both this method AND the calls this method makes to other methods.
In other words, the Time tells you how much work is being done exclusively in this method, and the Time With Children tells you how much work is being done inclusively in this method and everything it calls (for example, if a method spends 10 ms in its own body and another 40 ms inside the methods it calls, its Time is 10 ms and its Time With Children is 50 ms). You can also choose to display the methods in a tree or in a grid. The tree view is the default; it shows the method calls arranged as a tree, with each call nested under the parent method that called it. This is useful because when you find a hot-spot method, you can see who is calling it to determine whether the problem is the method itself, or whether it is being called too many times. The grid view lists each method only once with its totals and is useful for quickly seeing which method is the trouble spot. In addition, you can choose to display Methods with source, which are generally the methods you wrote (as opposed to native or BCL code), or Any Method, which shows not only your methods, but also native calls, JIT overhead, synchronization waits, etc. So these are just two ways of viewing the same data, and you’re free to choose the organization that best suits what information you are after.
Analyzing Method Source
If we look at the timings above, we see that our AddToList() method (and in particular, its call to the List<T>.Sort() method in the BCL) is the hot-spot in this analysis. If ANTS sees a method that is consuming the most time, it will flag it as a hot-spot to help call out potential areas of concern. This doesn’t mean the other statistics aren’t meaningful, but the hot-spot is most likely going to be your biggest bang for the buck to concentrate on. So let’s select the AddToList() method and see what it shows in the source window below. Notice the source breakout in the bottom pane when you select a method (from either tree or grid view). This shows you the timings in this method per line of code, which gives you a major indicator of where the trouble spot in this method is. So in this case, we see that performing a Sort() on the List<T> after every Add() is killing our performance! Of course, this was a very contrived “duh” moment, but you’d be surprised how many performance issues turn out to be “duh” moments. Note that this one line is taking up 86% of the execution time of this application! If we eliminate this bottleneck, we should see a drastic improvement in performance. To fix this, if we still wanted to maintain the List<T>, we’d have many options, including: delaying the Sort() until after all the Add() calls; using a SortedSet, SortedList, or SortedDictionary, depending on which is most appropriate; or forgoing the sorting altogether and using a Dictionary.
Rinse, Repeat!
So let’s just change all instances of List<string> to SortedSet<string> and run this again through the profiler. Now we see the AddToList() method is no longer our hot-spot; the Max() and Min() calls are! This is good because we’ve eliminated one hot-spot and now we can try to correct this one as well. As before, we can then optimize this part of the code (possibly by taking advantage of the fact the list is now sorted and returning the first and last elements; see the sketch before the Summary below). We can then rinse and repeat this process until we have eliminated as many bottlenecks as possible.
Calls by Web Request
Another feature that was added recently is the ability to view .NET methods grouped by the HTTP requests that caused them to run. This can be helpful in determining which pages, web services, etc. are causing hot spots in your web applications.
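Before moving on to the summary: to make the “Rinse, Repeat” step above concrete, here is a minimal sketch (mine, not code from the original post) of what the two helper methods might look like after switching to SortedSet<string>. It drops into the same Program class as the earlier listing. One behavioral difference worth noting: unlike List<T>, SortedSet<T> silently ignores duplicate adds.

// A sketch of the rewrite described above - SortedSet<string> keeps its
// elements ordered on insert, so the per-Add Sort() call disappears.
public static void AddToList(SortedSet<string> set, string item)
{
    set.Add(item); // O(log n) insert; the set stays sorted (duplicates are ignored)
}

// All ids come from random.Next(1000000, 9999999) and are therefore seven
// digits long, so lexicographic order matches numeric order and the set's
// Max/Min properties give the high and low ids directly.
public static Tuple<int, int> GetHighLow(SortedSet<string> set)
{
    return Tuple.Create(Convert.ToInt32(set.Max), Convert.ToInt32(set.Min));
}

With this change, each iteration goes from an O(n log n) sort per Add() plus two O(n) LINQ scans to an O(log n) insert and two cheap end lookups, which is consistent with both hot-spots disappearing in the second profiler run.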
Summary
If you like the other ANTS tools, you’ll like the ANTS Performance Profiler as well. It is extremely easy to use, with very little product knowledge required to get up and running. There are profilers built into the higher product lines of Visual Studio, of course, which are also powerful and easy to use. But for jumping in and finding hot spots rapidly, Red Gate’s Performance Profiler 7.4 is an excellent choice. Technorati Tags: Influencers, ANTS, Performance Profiler, Profiler

    Read the article

  • Translating with Google Translate without API and C# Code

    - by Rick Strahl
Some time back I created a database-driven ASP.NET Resource Provider along with some tools that make it easy to edit ASP.NET resources interactively in a Web application. One of the small helper features of the interactive resource admin tool is the ability to do simple translations using both Google Translate and Babelfish. Here's what this looks like in the resource administration form: when a resource is displayed, the user can click a Translate button and it will show the current resource text and let you set the source and target languages to translate. The Go button fires the translation for both Google and Babelfish and displays them - pressing Use then changes the language of the resource to the target language and sets the resource value to the newly translated value. It's a nice and quick way to get a translation going.
Ch… Ch… Changes
Originally, both implementations basically did some screen scraping of the interactive Web sites and retrieved the translated text out of the result HTML. Screen scraping is always kind of an iffy proposition as content can be changed easily, but surprisingly that code worked for many years without fail. Recently however, Google at least changed their input pages to use AJAX callbacks and the page updates no longer worked the same way. End result: the Google translate code was broken. Now, Google does have an official API that you can access, but the API is being deprecated and you actually need to have an API key. Since I have public samples that people can download, the API key is an issue if I want people to have the samples work out of the box - the only way I could even do this is by sharing my API key (not allowed). However, after a bit of spelunking and playing around with the public site, I found that Google's interactive translate page actually makes callbacks using plain public access without an API key. By intercepting some of those AJAX calls and calling them directly from code, I was able to get translation back up and working with minimal fuss by parsing out the JSON these AJAX calls return. Warning: this is hacky code, but after a fair bit of testing I found it to work very well with all sorts of languages and accented and escaped text etc., as long as you stick to small blocks of translated text. I thought I'd share it in case anybody else had been relying on a screen scraping mechanism like I did and needed a non-API based replacement. Here's the code:

/// <summary>
/// Translates a string into another language using Google's translate API JSON calls.
/// <seealso>Class TranslationServices</seealso>
/// </summary>
/// <param name="text">Text to translate. Should be a single word or sentence.</param>
/// <param name="fromCulture">
/// Two letter culture (en of en-us, fr of fr-ca, de of de-ch)
/// </param>
/// <param name="toCulture">
/// Two letter culture (as for fromCulture)
/// </param>
public string TranslateGoogle(string text, string fromCulture, string toCulture)
{
    fromCulture = fromCulture.ToLower();
    toCulture = toCulture.ToLower();

    // normalize the culture in case something like en-us was passed
    // retrieve only en since Google doesn't support sub-locales
    string[] tokens = fromCulture.Split('-');
    if (tokens.Length > 1)
        fromCulture = tokens[0];

    // normalize ToCulture
    tokens = toCulture.Split('-');
    if (tokens.Length > 1)
        toCulture = tokens[0];

    string url = string.Format(@"http://translate.google.com/translate_a/t?client=j&text={0}&hl=en&sl={1}&tl={2}",
                               HttpUtility.UrlEncode(text), fromCulture, toCulture);

    // Retrieve Translation with HTTP GET call
    string html = null;
    try
    {
        WebClient web = new WebClient();

        // MUST add a known browser user agent or else response encoding doesn't return UTF-8 (WTF Google?)
        web.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0");
        web.Headers.Add(HttpRequestHeader.AcceptCharset, "UTF-8");

        // Make sure we have response encoding to UTF-8
        web.Encoding = Encoding.UTF8;
        html = web.DownloadString(url);
    }
    catch (Exception ex)
    {
        this.ErrorMessage = Westwind.Globalization.Resources.Resources.ConnectionFailed + ": " +
                            ex.GetBaseException().Message;
        return null;
    }

    // Extract out trans":"...[Extracted]..."," from the JSON string
    string result = Regex.Match(html, "trans\":(\".*?\"),\"", RegexOptions.IgnoreCase).Groups[1].Value;

    if (string.IsNullOrEmpty(result))
    {
        this.ErrorMessage = Westwind.Globalization.Resources.Resources.InvalidSearchResult;
        return null;
    }

    //return WebUtils.DecodeJsString(result);

    // Result is a JavaScript string so we need to deserialize it properly
    JavaScriptSerializer ser = new JavaScriptSerializer();
    return ser.Deserialize(result, typeof(string)) as string;
}

To use the code is straightforward enough - simply provide a string to translate and a pair of two letter source and target languages:

string result = service.TranslateGoogle("Life is great and one is spoiled when it goes on and on and on", "en", "de");
TestContext.WriteLine(result);

How it works
The code to translate is fairly straightforward. It basically uses the URL I snagged from the Google Translate web page, slightly changed to return a JSON result (&client=j) instead of the funky nested PHP-style JSON array that the default returns. The JSON result returned looks like this:

{"sentences":[{"trans":"Das Leben ist großartig und man wird verwöhnt, wenn es weiter und weiter und weiter geht","orig":"Life is great and one is spoiled when it goes on and on and on","translit":"","src_translit":""}],"src":"en","server_time":24}

I use WebClient to make an HTTP GET call to retrieve the JSON data and strip out the part of the full JSON response that contains the actual translated text. Since this is a JSON response, I need to deserialize the JSON string in case it's encoded (for upper/lower ASCII chars or quotes etc.). A couple of odd things to note in this code: first, note that a valid user agent string must be passed (or at least one starting with a common browser identification - I use Mozilla/5.0). Without this Google doesn't encode the result with UTF-8, but instead uses an ISO encoding that .NET can't easily decode. Google seems to ignore the character set header and use the user agent instead, which is - odd to say the least.
The other is that the code returns a full JSON response. Rather than use the full response and decode it into a custom type that matches Google's result object, I just strip out the translated text. Yeah, I know that's hacky, but it avoids an extra type and firing up the JavaScript deserializer. My internal version uses a small DecodeJsString() method to decode the JavaScript string without the overhead of a full JSON parser. It's obviously not rocket science, but as mentioned above what's nice about it is that it works without a Google API key. I can't vouch for how many translations you can do before there are cut-offs, but in my limited testing, running a few stress tests on a Web server under load, I didn't run into any problems.
Limitations
There are some restrictions with this: It only works on single words or single sentences - multiple sentences (delimited by .) are cut off at the ".". There is also a length limitation which appears to kick in at around 220 characters or so. While that may not sound like much, for typical word or phrase translations this is plenty of length. Use with a grain of salt - Google seems to be trying to limit their exposure to usage of the Translate APIs, so this code might break in the future, but for now at least it works. FWIW, I also found that Google's translation is not as good as Babelfish, especially for contextual content like sentences. Google is faster, but Babelfish tends to give better translations. This is why in my translation tool I show both the Google and Babelfish values retrieved. You can check out the code for this in the West Wind Web Toolkit's TranslationService.cs file, which contains both the Google and Babelfish translation code pieces. Ironically, the Babelfish code has been working forever using screen scraping and continues to work just fine today. I think it's a good idea to have multiple translation providers in case one is down or changes its format, hence the dual display in my translation form above. I hope this has been helpful to some of you - I've actually had many small uses for this code in a number of applications and it's sweet to have a simple routine that performs these operations for me easily.
Resources
Live Localization Sample - Localization Resource Provider Administration form that includes options to translate text using Google and Babelfish interactively.
TranslationService.cs - The full source code in the West Wind Web Toolkit's Globalization library that contains the translation code.
© Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp, HTTP
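An aside, not part of the original post: if you would rather deserialize the full JSON response than regex-strip the trans value, a small set of custom types matching the sample response shown above would work with the same JavaScriptSerializer. The type and property names below are hypothetical - they simply mirror the keys visible in the sample JSON, not any official Google contract:

using System.Collections.Generic;
using System.Web.Script.Serialization;

// Hypothetical shapes mirroring the sample JSON response above.
public class GoogleTranslateResponse
{
    public List<TranslatedSentence> sentences { get; set; }
    public string src { get; set; }
    public int server_time { get; set; }
}

public class TranslatedSentence
{
    public string trans { get; set; }         // the translated text
    public string orig { get; set; }          // the original input text
    public string translit { get; set; }      // transliteration, when provided
    public string src_translit { get; set; }  // source transliteration
}

// Usage - would replace the Regex.Match()/Deserialize() tail of TranslateGoogle():
// var ser = new JavaScriptSerializer();
// var response = ser.Deserialize<GoogleTranslateResponse>(html);
// return response.sentences[0].trans;

The regex approach in the post trades these extra types for a one-liner, which is a reasonable call when only one field is needed; the typed version buys you the orig and src fields (and automatic JSON unescaping) if you ever need them.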

    Read the article

  • CodePlex Daily Summary for Friday, December 17, 2010

CodePlex Daily Summary for Friday, December 17, 2010
Popular Releases
VCC: Latest build, v2.1.31217.1: Automatic drop of latest build.
BCrypt.Net: BCrypt.Net R4: Fixed an integer overflow at workFactor = 31.
LiveChat Starter Kit: LCSK v1.0: This is a working version of the LCSK for Visual Studio 2010, ASP.NET MVC 3 (using the Razor view engine). This is still provider based (with one SQL provider) and still uses a web service and a Windows Forms operator console. The solution is cleaner, with an installer to create tables etc. Let me know your feedback.
Orchard Project: Orchard 0.9: Orchard release notes. Build: 0.9.253, published 12/16/2010. How to install Orchard: to install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard-Using-Web-PI.ashx. Web PI will detect your hardware environment and install the application. Alternatively, to install the release manually, download the Orchard.Web.0.9.253.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orch...
SharpDropBox Client for .NET: WP7 SharpDropBox Client - 0.1 Technology Preview: I decided to go ahead and release this. It works well for simple browsing of the folder structure/downloading files (and login works). See samples for an example of how to use it. I am in progress with a couple of other methods which aren't currently working.
SQL Monitor: SQL Monitor 2.9: 1. Automatically set SQL for a new query if an object is selected (table/sp/function/view).
SplendidCRM: SplendidCRM 5.0 Community Edition: SplendidCRM Software has adopted the GNU Affero General Public License Version 3 (AGPLv3) for its Community Edition. This release includes the full set of SQL source code in the Community Edition, something that was previously only available in the Professional and Enterprise Editions. An article on the subject of commercial open-source licensing has been posted at http://www.codeproject.com/KB/architecture/splendid-guide-article6.aspx.
DotSpatial: DotSpatial 12-15-2010: This release contains a few minor bug fixes and hopefully the GDAL libraries for the 3.5 x86 build actually built to the correct directory this time.
DotNetNuke® Community Edition: 05.06.01 Beta: This is the initial Beta of DotNetNuke 5.6.1. See the DotNetNuke Roadmap for a full list of changes in this release.
MSBuild Extension Pack: December 2010: Release blog post. The MSBuild Extension Pack December 2010 release provides a collection of over 380 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound. Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...
Access Control Service Samples and Documentation (Labs): Samples-R3: Contains the latest ACS samples (corresponding to the R3 release) that show how to integrate ACS with web services, ASP.NET websites (Web Forms and MVC) and how to interact with the ACS Management Service. The Readmes for these samples are available here.
TweetSharp: TweetSharp v2.0.0.0 - Preview 5: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: this code is currently preview quality. Preview 5 changes: maintenance release with user-reported fixes. Preview 4 changes: reintroduced fluent interface support via satellite assembly; added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development; numerous fixes reported by preview users. Preview 3 changes: numerous ...
EnhSim: EnhSim 2.2.2 ALPHA: This release adds in the changes for 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84. To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992. - The spirit ...
Silverlight Contrib: Silverlight Contrib 2010.1.0: 2010.1.0 new features: compatibility release for Silverlight 4 and Visual Studio 2010.
FlickrNet API Library: 3.1.4000: Newest release. Now contains a dedicated Windows Phone 7 DLL as well as all previous DLLs. Also contains Windows Help file documentation as standard.
mojoPortal: 2.3.5.8: See the release notes on mojoportal.com: http://www.mojoportal.com/mojoportal-2358-released.aspx. Note that we have separate deployment packages for .NET 3.5 and .NET 4.0. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code. To download the source code see the Source Code tab. I recommend getting the latest source code using TortoiseHg; you can get the source code corresponding to this release here.
Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010.
SuperWebSocket: SuperWebSocket Drop 2: Changes: based on SuperSocket 1.3; supports sub-protocols; supports SSL/TLS encryption (wss) in sync socket mode; fixed some data communication bugs.
Wii Backup Fusion: Wii Backup Fusion 0.9 Beta: - Aqua or brushed metal style for Mac OS X - Shows selection count beside ID - Game list selection mode via settings - Compare Files <-> WBFS game lists - Verify game images/DVD/WBFS - WIT command line for log (via settings) - Cancel possibility for loading games process - Progress infos while loading games - Localization for dates - UTF-8 support - Shortcuts added - View game infos in browser - Transfer infos for log - All transfer routines rewritten - Extract image from image/WBFS - Support...
.NETTER Code Starter Pack: v1.0.beta: '.NETTER Code Starter Pack' contains a gallery of Visual Studio 2010 solutions leveraging the latest technologies and frameworks based on the Microsoft .NET Framework. Each Visual Studio solution included here is focused on providing a very simple starting point for cutting edge development technologies and frameworks, using the well known Northwind database (for database driven scenarios). The current release of this project includes starter samples for the following technologies: ASP.NET Dynamic...
New Projects
aoleFilter: This is a Filter by Magicshui.
Chocottone: Simple to-do list.
Data Access Engine (DAE): Data Access Engine (DAE) is an open source and free .NET component to access all popular DBMSs such as Microsoft SQL Server, MySQL, Oracle, Microsoft Access, SQLite and databases connected via ODBC. DAE helps to connect to different DBMSs at the same time.
DependencyEvaluation: Programmatically sort your objects based on dependencies. Would work as a compiler framework, project planning, data binding, etc.
Doc2Text Converter: A converter that can convert document files (like .doc, .ppt, and .pdf) to plain text.
Dynamics AX Test Runner for Visual Studio 2010: Invoke Dynamics AX test cases from within Visual Studio 2010 and retrieve the results.
ExtensibleExtensions: Pack of extensions; text utilities like pluralize and capitalize will be included first.
FluentHttp: .NET fluent API wrapper for creating RESTful web requests.
K:L:O:N:K Updater Service: The K:L:O:N:K Updater Service is a deployment tool which can be used to deploy Microsoft Installer (.msi), zip or other file formats. A directory is set up to be the deployment directory. Put files into this directory and the packages are distributed and installed at the clients.
Keyki: My code repository.
LED Editor: Simple project for editing a LED sequence ...
Maui: Scheduler.
McAfee Vulnerability Manager - Delta Report: Processes two .CSV files generated by McAfee Vulnerability Manager to highlight which vulnerabilities were patched or are still outstanding.
milkway Project: A Java web project under Spring. Galaxy is an enterprise wiki system.
Minecraft Save Wizard: Do you like Minecraft? Do you like it so much that you wish there were more worlds? Well now you can have as many worlds as you desire. Simply move them to and from your saves folder to a backup folder using this software. It couldn't be simpler ;)
Minis Manager: A manager for miniature figures to use for RPGs etc.
Model2Form: An ASP.NET control similar to GridView, but it auto-builds a web form at run time by binding a model.
OBD C# Wrapper: I want to help people get data from an OBD system. The idea is to create a C# class with preconfigured methods to load values and use them in a GUI. With this class, people can focus on the GUI design and not on the interface with OBD.
Opalis Extension Local Group and User Management: An Opalis integration pack allowing for management of local computer groups and users.
Opalis Integration Pack: VMWare VSphere: An integration pack for Opalis, extending Opalis to integrate fully with VMware. Built using the VMware PowerShell cmdlets wrapped in C#.
Opalis Utilities: An integration pack for Opalis, extending Opalis to provide some additional utilities.
Orchard Maps: A Maps module for Orchard.
Our ICProject: IC 2011 project.
patterns & practices: Project Silk: Project Silk provides guidance and sample implementations that describe and illustrate recommended practices for building next generation web applications using web technologies like HTML5, jQuery, CSS3 and IE9.
pianduan: ????
pob: XNA game in development.
pscommand Firefox Extension: A Firefox extension which allows the user to invoke PowerShell commands on links.
R.NET: R.NET enables the .NET Framework to collaborate with R statistical computing. R.NET requires .NET Framework 4 and R.dll. You already have the DLL in the 'bin' folder if you installed the R environment, and you need no other extra installations. R.NET is developed in C#.
Rough Set tool set: Rough Set Tool Set.
Serial Port Terminal (SPTerm): Serial Port Terminal (SPTerm) is used for basic communication over a serial (COM) port. Sending bytes and ASCII from a PC can be done using SPTerm. It is useful for microcontroller projects using UART and simple transmission. Hex data can be sent out directly from the text box in SPTerm.
SLGame: Null.
The Jumping Point: TJP is a 2-player sidescroller based on SFML. TJP is developed in C++ and will be available for both Linux and Windows.
UIT CRM: Our group's project for the IT Project Management course (University of Information Technology - VNU HCMC).

    Read the article

  • "Translator by Moth"

    - by Daniel Moth
This article serves as the manual for the free Windows Phone 7 app called "Translator by Moth". The app is available from the following link (browse the link on your Windows Phone 7 phone, or from your PC with the Zune software installed): http://social.zune.net/redirect?type=phoneApp&id=bcd09f8e-8211-e011-9264-00237de2db9e
Startup
At startup the app makes a connection to the Bing Microsoft Translator service to retrieve the available languages, and also which languages offer playback support (two network calls total). It populates the two list pickers ("from" and "to") on the "current" page with the results. If for whatever reason the network call fails, you are informed via a message box, and the app keeps trying to make a connection every few seconds. When it eventually succeeds, the language pickers on the "current" page get updated. Until it succeeds, the language pickers remain blank and hence no new translations are possible. As you can guess, if the Microsoft Translator service adds more languages for textual translation (or enables more for playback), the app will automatically pick those up.
"current" page
The "current" page is the main page of the app, with the language pickers, translation boxes and the application bar.
Language list pickers
The "current" page allows you to pick the "from" and "to" languages, which are populated at start time. Until these languages get populated with the results of the network calls, they remain empty and disabled. When enabled, tapping on either of them brings up the list of languages to pick from in a full-screen popup, formatted as English name followed by native name (when the latter is known). The "to" list, in addition to the language names, indicates which languages have playback support via a * in front of the language name. When making a selection for the "to" language, if there is text entered for translation, a translation is performed (so there is no need to tap on the "translate" application bar button). Note that both language choices are remembered between different launches of the application.
text for translation
The textbox where you enter the text for translation is always enabled. When there is nothing entered in it, it displays (centered and in italics) text prompting you to enter some text for translation. When you tap on it, the prompt text disappears and it becomes truly empty, waiting for input via the keyboard that automatically pops up. The text you type is left-aligned and not in italic font. The keyboard shows suggestions of text as you type. The keyboard can be dismissed either by tapping somewhere else on the screen, via tapping on the Windows Phone hardware "back" button, or via tapping on the "enter" key. In the latter case (tapping on the "enter" key), if there was text entered and the "from" language is not blank, a translation is performed (so there is no need to tap on the "translate" application bar button). The last text entered is remembered between application launches.
translated text
The translated text appears below the "to" language (left-aligned, in normal font). Until a translation is performed, there is a message in that space informing you of what to expect (a translation appearing there). When the "current" page is cleared via the "clear" application bar button, the translated text reverts back to the message.
Note a subtle point: when a translation has been performed and subsequently you change the "from" language or the text for translation, the translated text remains in place but is now in italic font (attempting to indicate that it may be out of date). In any case, this text is not remembered between application launches.
application bar buttons and menus
There are 4 application bar buttons and 4 application bar menus.
"translate" button: takes the text for translation and translates it into the translated text, via a single network call to the Bing Microsoft Translator service. If the network call fails, the user is informed via a message box. The button is disabled when there is no "from" language available or when there is no text for translation entered.
"play" button: takes the translated text and plays it out loud in a native speaker's voice (of the "to" language), via a single network call to the Bing Microsoft Translator service. If the network call fails, the user is informed via a message box. The button is disabled when there is no "to" language available or when there is no translated text available.
"clear" button: clears any user text entered in the text for translation box and any translation present in the translated text box. If both of those are already empty, the button is disabled. It also stops any playback if there is one in flight.
"save" button: saves the entire translation ("from" language, "to" language, text for translation, and translated text) to the bottom of the "saved" page (described later), and simultaneously switches to the "saved" page. The button is disabled if there is no translation or the translation is not up to date (i.e. one of the elements has been changed).
"swap to and from languages" menu: swaps around the "from" and "to" languages. It also takes the translated text and inserts it in the text for translation area. The translated text area becomes blank. The menu is disabled when there is no "from" and "to" language info.
"send translation via sms" menu: takes the translated text and creates an SMS message containing it. The menu is disabled when there is no translation present.
"send translation via email" menu: takes the translated text and creates an email message containing it (after you choose which email account you want to use). The menu is disabled when there is no translation present.
"about" menu: shows the "about" page described later.
"saved" page
The "saved" page is initially empty. You can add translations to it by translating text on the "current" page and then tapping the application bar "save" button. Once a translation appears in the list, you can read it all offline (both the "from" and "to" text). Thus, you can create your own phrasebook list, which is remembered between application launches (it is stored on your device). To listen to the translation, simply tap on it – this is only available for languages that support playback, as indicated by the * in front of them. The sound is retrieved via a single network call to the Bing Microsoft Translator service (if it fails, an appropriate message is displayed in a message box). Tap and hold on a saved translation to bring up a context menu with 4 items:
"move to top" menu: moves the selected item to the top of the saved list (and scrolls there so it is still in view).
"copy to current" menu: takes the "from" and "to" information (language and text), and populates the "current" page with it (switching at the same time to the current page).
This allows you to make tweaks to the translation (text or languages) and potentially save it back as a new item. Note that the action makes a copy of the translation, so you are not actually editing the existing saved translation (which remains intact).
"delete" menu: deletes the selected translation.
"delete all" menu: deletes all saved translations from the "saved" page – there is no way to get that info back other than re-entering it, so be cautious.
Note: Once playback of a translation has been retrieved via a network call, Windows Phone 7 caches the results. What this means is that as long as you play a saved translation once, it is likely that it will be available to you for some time, even when there is no network connection.
"about" page
The "about" page provides some textual information (that you can view in the screenshot) including a link to the creator's blog (that you can follow on your Windows Phone 7 device). Use that link to discover the email for any feedback.
Other UI design info
As you can see in the screenshots above, "Translator by Moth" has been designed from scratch for Windows Phone 7, using the nice pivot control and application bar. It also supports both portrait and landscape orientations, and looks equally good in both the light and the dark theme. Other than the default black and white colors, it uses the user's chosen accent color (which is blue in the screenshot examples above).
Feedback and support
Please report (via the email on the blog) any bugs you encounter or opportunities for performance improvements and they will be fixed in the next update. Suggestions for new features will be considered, but given that the app is FREE, no promises are made. If you like the app, don't forget to rate "Translator by Moth" on the marketplace. Comments about this post welcome at the original blog.

    Read the article

< Previous Page | 153 154 155 156 157 158 159 160 161 162 163  | Next Page >