Search Results

Search found 1742 results on 70 pages for 'combine'.

  • Optimising For Google and Bing - How Distinct Are They?

    With Bing and Yahoo set to combine soon, the Bing search engine will probably account for nearly 30% of the search engine market, which means it could be beneficial to optimise for both Bing and Google in the near future. That does not entail making sweeping changes to the optimisation approach you presently use: an evaluation by SEOmoz reveals that the two engines appear to be growing increasingly similar.

    Read the article

  • How to Optimize Your Website With SEO Placement Strategies

    The best way to win the SEO game is to use a combination of techniques. Compelling content, unique keywords, and link exchange have all proven effective in optimizing a website. The next step is to combine them into a frequent, if not daily, activity.

    Read the article

  • Formatting a memory stick with two partitions?

    - by Marius
    I have a 16GB memory stick which used to have a Linux partition. It therefore has two partitions: a 2GB FAT32 partition and a 14GB Linux boot partition. The Linux part stopped working, so I decided to reinstall it, but Windows can't see that partition. I tried formatting the whole disk, but I can only format one partition (the FAT32 one). There seems to be no way to combine the two partitions into one big one, and there seems to be no way for Windows to partition the large part of the memory stick to put Linux on it. In the Windows partition manager, Windows sees the large unused partition and lets me delete it, but once I have deleted it, I'm not allowed to format it. I also cannot delete or resize the small partition. So, to summarize: I have a memory stick with two partitions. Windows only sees one of them and won't let me use the other one. I would like to combine the two partitions so I can install Linux on the memory stick again.
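
    For reference, the built-in diskpart tool can usually wipe removable sticks that the Windows partition manager refuses to touch. A minimal sketch (the disk number 2 is an assumption; double-check it against the output of list disk, since clean erases the selected disk):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 2
        DISKPART> clean
        DISKPART> create partition primary
        DISKPART> format fs=fat32 quick
        DISKPART> assign

    After that, the stick is a single FAT32 volume again, and a Linux installer can repartition it as it pleases.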

    Read the article

  • ssh connection operation timed out using rsync

    - by Mark Molina
    I use rsync to back up my remote server on my local device, but when I combine it with a cron job my ssh times out. Just to be clear, the data is stored on a remote server and I want it stored on my local server; the backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it in the terminal like this:

        rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    but when I combine it with a cron job like this:

        10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    the ssh connection times out. When the cron job executes, it sends a mail to the root user with output like this:

        From local.xx.xx.xx Tue Jul 2 11:20:17 2013
        X-Original-To: username
        Delivered-To: [email protected]
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=username>
        X-Cron-Env: <USER=username>
        X-Cron-Env: <HOME=/Users/username>
        Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST)

        ssh: connect to host IP_ADDRESS port XX: Operation timed out
        rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
        rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9]

    So the rsync command works when typed in the terminal but not when run by a cron job. Can anybody explain this?
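
    A frequent first suspect with cron is its minimal environment (note the PATH=/usr/bin:/bin header in the mail above) together with ssh keys or agents that are available in an interactive shell but not under cron. A minimal crontab sketch using absolute paths and an explicit, passphrase-free key; the key path, binary locations and port flag are assumptions:

        10 11 * * * /usr/bin/rsync -chavzP --stats -e "/usr/bin/ssh -i /Users/username/.ssh/backup_key -p XX" USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP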

    Read the article

  • AviSynth ChangeFPS: combining videos with different framerates

    - by Daniel Saner
    I have two video recordings of the same scene, but with different framerates, that I would like to combine using an AviSynth script. One video is recorded at 30fps, the other at 120fps. What I would like to do is keep them temporally synchronised, meaning that for each frame of the 30fps video, the output should display 4 frames from the 120fps video. I would like the final output video to play at 30fps, so that its duration is 4 times that of the original recordings. From AviSynth's documentation, it seems ChangeFPS is the function I'll need, since it removes and duplicates frames, while AssumeFPS just changes the playback speed (and my plan is basically to quadruple every frame of the 30fps clip). However, the filter does not seem to do what it says. If I try:

        clip30 = AviSource("0326.avi").ChangeFPS(120)
        clip120 = AviSource("0326-120fps.avi")

    it doesn't affect the playback speed or frame count of the 30fps clip at all, but removes every fourth frame from the 120fps clip, which is not at all what I want. Unfortunately, appending .ChangeFPS(7.5) to the clip120 instead does not have the inverse effect; in that case, it does exactly what you'd expect. Alternatively, if I try:

        clip30 = AviSource("0326.avi").AssumeFPS(7.5)
        clip120 = AviSource("0326-120fps.avi")

    there is no effect at all; both clips are played back at 30fps, meaning that only a quarter of the 120fps clip has been shown by the time the 30fps clip is over. So how can I combine these two clips in the manner I want? I was unable to find any other internal or external filters that would help me do that. It seems to me that if ChangeFPS did what the manual says, it would be the right one for the job.
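
    For what it's worth, a minimal script sketch of one way to get the intended result, assuming ChangeFPS duplicates frames as documented and both clips share a resolution (the side-by-side StackHorizontal layout is an assumption):

        # quadruple every frame of the 30fps clip, then re-time both clips
        # so they play at 30fps, i.e. at 4x their original duration, in sync
        clip30  = AviSource("0326.avi").ChangeFPS(120).AssumeFPS(30)
        clip120 = AviSource("0326-120fps.avi").AssumeFPS(30)
        StackHorizontal(clip30, clip120)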

    Read the article

  • How to always show titles on Windows 7 Taskbar thumbnail preview?

    - by Cooper
    I often use the Win+n shortcuts to 'alt-tab' between windows of a pinned application (i.e. Win+2 to cycle through multiple PuTTY windows, where PuTTY is pinned as app #2 on the taskbar). Win+n always pops up the thumbnail previews of all the windows, but sometimes it shows window titles above the thumbnails and sometimes it doesn't. Is there any way to always show the titles (a registry setting, perhaps)? For PuTTY sessions this would be especially nice, as the title contains the hostname and the thumbnail is too small to tell which host a window is for. I've noticed that the titles usually show up when there are more than ~4-6 windows of that pinned item open, but the number seems to vary; is there a threshold setting for this? Update: I just noticed the titles show up whenever the taskbar buttons combine, which varies based on how many windows from other apps are open. So I'm now looking for a way to combine buttons while keeping labels; the options in the 'Taskbar and Start Menu Properties' dialog are close, but finding the registry settings behind them should probably do it.
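
    For reference, the grouping behaviour is governed by a known registry value, TaskbarGlomLevel; whether forcing grouping also forces the titles on is an assumption worth testing, but a minimal sketch would be:

        REM 0 = always combine and hide labels; 1 = combine when taskbar is full; 2 = never combine
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v TaskbarGlomLevel /t REG_DWORD /d 1 /f

        REM restart Explorer for the change to take effect
        taskkill /f /im explorer.exe && start explorer.exe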

    Read the article

  • New Support for JavaScript and CSS Bundling and Minification (ASP.NET 4.5 Series of Posts)

    - by Leniel Macaferi
    This is the sixth post in a series I am writing about ASP.NET 4.5. The upcoming releases of .NET and Visual Studio include a number of great new features and capabilities. With ASP.NET 4.5 you will see a lot of really exciting improvements in Web Forms and MVC, as well as in the core ASP.NET codebase on which these technologies are based. Today's post covers some of the work being done to add built-in support for bundling and minification of JavaScript and CSS files within ASP.NET, which makes it easier to improve the performance of applications. This feature can be used by all ASP.NET applications, including both ASP.NET MVC and ASP.NET Web Forms.

    Basics of Bundling and Minification
    As more and more people use mobile devices to browse the web, it is becoming increasingly important that the websites and applications we build perform well on them. We have all tried loading sites on our smartphones, only to eventually give up in frustration because they load slowly over the slow cellular network. If your site/application loads slowly like that, you are probably losing potential customers because of poor performance. Even with powerful desktop machines, your site's load time and perceived performance can contribute enormously to the customer's perception of it. Most websites today are built with multiple JavaScript and CSS files, to separate the code and keep the codebase cohesive. While this is a good practice from a coding standpoint, it often has negative consequences for the overall performance of the site. Multiple JavaScript and CSS files require multiple HTTP requests from the browser, which can slow down the site's load time.

    Simple Example
    Below, I opened a local site in IE9 and recorded the network traffic using IE's built-in developer tools (IE Developer Tools), which can be opened with the F12 key. As shown below, the site consists of 5 CSS files and 4 JavaScript files, which the browser has to download. Each file is requested separately by the browser and returned by the server, and the process can take a significant amount of time, proportional to the number of files involved.

    Bundling
    ASP.NET is adding a feature that makes it easy to "bundle" or "combine" multiple CSS and JavaScript files into fewer HTTP requests. This causes the browser to request far fewer files, which in turn reduces the time it takes to fetch them. Below is an updated version of the example shown above, which takes advantage of this new bundling functionality (making only one request for JavaScript and one request for CSS): the browser now has to send fewer requests to the server. The contents of the individual files have been combined into the same response, but the contents of the files remain the same, so the overall download size is exactly the same as before bundling (the sum of the sizes of the separate files). But note how, even on a local development machine (where network latency between browser and server is minimal), bundling the CSS and JavaScript files still manages to reduce the total page load time by almost 20%. On a slow network the performance improvement would be even greater.

    Minification
    The next release of ASP.NET is also adding a new feature that makes it easy to reduce, or "minify," the download size of the content. This is a process that removes whitespace, comments, and other unnecessary characters from CSS and JavaScript files. The result is smaller files, which are sent to and loaded by the browser much more quickly. The following graph shows the performance gain when bundling and minification are used together: even on my local development machine (where network latency is minimal), we now have a 40% performance improvement from where we originally started. On slow networks (and especially with international customers), the gains would be even more significant.

    Using Bundling and Minification within ASP.NET
    The next release of ASP.NET makes it really easy to take advantage of bundling and minification within projects, enabling performance gains like those shown in the scenarios above. It does this in a way that lets you avoid running custom tools as part of your build process; instead, ASP.NET has added runtime support to perform the bundling/minification dynamically (caching the results to make sure performance is really good). This enables a really clean development experience and makes it super easy to start taking advantage of these new features. Let's assume we have a simple project with 4 JavaScript files and 6 CSS files:

    Bundling and Minifying the CSS Files
    Say you want a page to reference all of the stylesheets inside the "Styles" folder shown above. Today you would have to add multiple CSS references to get them all, which would translate into six separate HTTP requests. The new bundling/minification feature now lets you bundle and minify all of the CSS files in the Styles folder simply by requesting a URL for the folder (in this case, "styles") with an additional "/css" path in the URL. This causes ASP.NET to scan the directory, bundle and minify the CSS files inside it, and send back a single HTTP response to the browser with all of the CSS content. You don't need to run any tool or pre-processing step to get this behavior. This lets you cleanly separate your styles into separate, feature-specific CSS files and keep an extremely clean development experience, without taking a runtime performance hit in the application. The Visual Studio designer will also honor the bundling/minification logic, so you still get a WYSIWYG experience in the designer inside VS.

    Bundling and Minifying the JavaScript Files
    Like the CSS approach shown above, if we wanted to bundle and minify all of our JavaScript files into a single response, we could request a URL for the folder (in this case, "scripts") with an additional "/js" path. This causes ASP.NET to scan the directory, bundle and minify the files with a .js extension inside it, and send back a single HTTP response to the browser with all of the JavaScript content. Once again, no custom tools or build steps are required to get this behavior, and it works in all browsers.

    Ordering of Files within a Bundle
    By default, when files are bundled by ASP.NET, they are first sorted alphabetically, exactly as they are shown in Solution Explorer. They are then automatically rearranged so that well-known libraries and their custom extensions, such as jQuery, MooTools and Dojo, are loaded before anything else. So the default order for the bundle of the Scripts folder shown above will be: jquery-1.6.2.js, jquery-ui.js, jquery.tools.js, a.js. By default, CSS files are also sorted alphabetically and then rearranged so that reset.css and normalize.css (if present in the folder) always come before any other file. So the default sort order for the bundle of the "Styles" folder shown above will be: reset.css, content.css, forms.css, globals.css, menu.css, styles.css. The sorting is fully customizable and can easily be changed to accommodate most cases and whatever naming pattern you prefer. The goal of the out-of-the-box experience, though, is to have smart defaults that you can just use and be successful with.

    Any Number of Directories/Subdirectories Supported
    In the example above, we had just a single "Scripts" and "Styles" folder in our application. This works for some kinds of applications (e.g. single-page applications). Often, though, you will want multiple CSS/JS bundles within your application; for example, a "common" bundle with the core JS and CSS files that all pages use, plus page-specific or section-specific files that are not used globally. You can use the bundling/minification support in any number of directories or subdirectories in your project; this makes it easier to structure your code so as to maximize the benefits of bundling/minification. By default, each directory can be accessed as a separate, URL-addressable bundle.

    Bundling/Minification Extensibility
    ASP.NET's bundling and minification support is built with extensibility in mind, and every part of the process can be extended or replaced.

    Custom Rules
    In addition to the directory-based bundling approach that works out of the box, ASP.NET also supports registering custom bundles using a new programming API being exposed. The following code demonstrates how you can register a "customscript" bundle using code inside an application's Global.asax class. The API lets you add/remove/filter the files that make up the bundle in a very granular way. The custom bundle can then be referenced anywhere within the application using a <script> reference.

    Custom Processing
    You can also override the default CSS and JavaScript bundles to support your own custom processing of the bundle files (for example: custom minification rules, or support for Sass, LESS or CoffeeScript syntax). In the example shown below, we indicate that we want to replace the built-in minification transforms with custom MyJsTransform and MyCssTransform classes. These subclass the default CSS and JavaScript minifiers, respectively, and can add extra functionality. The end result of this extensibility is that you can plug into the bundling/minification logic at a deep level and do some very cool things with this feature.

    2 Minute Video of Bundling and Minification in Action
    Mads Kristensen has a great 90-second video that demonstrates using the Bundling and Minification feature. You can watch the 90-second video here.

    Summary
    The new support for bundling and minifying CSS and JavaScript files in the next release of ASP.NET will make it easier to build fast web applications. The feature is really easy to use and does not require big changes to your existing development workflow. It also supports a rich extensibility API that lets you customize the logic however you see fit. You can easily take advantage of this new support within both ASP.NET MVC and ASP.NET Web Forms applications. Hope this helps, Scott. P.S. In addition to the blog, I use Twitter for quick posts and to share links. My Twitter handle is: @scottgu. Translated from the original post by Leniel Macaferi.
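
    The Global.asax registration code referred to above appeared as screenshots in the original post and did not survive extraction. As a stand-in, here is a minimal sketch against the System.Web.Optimization API that this feature eventually shipped with (the bundle path and file names are illustrative, and the preview-era API shown in the original post may differ slightly):

        using System.Web.Optimization;

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                // a custom "customscript" bundle assembled from hand-picked files,
                // rather than from a whole directory
                var bundle = new ScriptBundle("~/bundles/customscript")
                    .Include("~/Scripts/jquery-1.6.2.js",
                             "~/Scripts/a.js");

                BundleTable.Bundles.Add(bundle);
            }
        }

    The bundle is then referenced from a page with an ordinary script tag pointing at its virtual path, e.g. <script src="/bundles/customscript"></script>.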

    Read the article

  • C#/.NET Little Wonders: Fun With Enum Methods

    - by James Michael Hare
    Once again, let's dive into the Little Wonders of .NET, those small things in the .NET languages and BCL classes that make development easier by increasing readability, maintainability, and/or performance. Probably every one of us has used an enumerated type at one time or another in a C# program. The enumerated types we create are a great way to represent that a value can be one of a set of discrete values (or a combination of those values in the case of bit flags). But the power of enum types goes far beyond simple assignment and comparison; there are many methods in the Enum class (that all enum types “inherit” from) that can give you even more power when dealing with them.

    IsDefined() – check if a given value exists in the enum
    Are you reading a value for an enum from a data source, but are unsure if it is actually a valid value or not? Casting won't tell you this, and Parse() isn't guaranteed to balk either if you give it an int or a combination of flags. So what can we do? Let's assume we have a small enum like this for result codes we want to return back from our business logic layer:

        public enum ResultCode
        {
            Success,
            Warning,
            Error
        }

    In this enum, Success will be zero (unless given another value explicitly), Warning will be one, and Error will be two. So what happens if we have code like this, where perhaps we're getting the result code from another data source (could be a database, could be a web service, etc.)?

        public ResultCode PerformAction()
        {
            // set up and call some method that returns an int.
            int result = ResultCodeFromDataSource();

            // this will succeed even if result is < 0 or > 2.
            return (ResultCode) result;
        }

    So what happens if result is -1 or 4? Well, the cast does not fail, so what we end up with would be an instance of a ResultCode whose value is outside the bounds of the enum constants we defined. This means if you had a block of code like:

        switch (result)
        {
            case ResultCode.Success:
                // do success stuff
                break;

            case ResultCode.Warning:
                // do warning stuff
                break;

            case ResultCode.Error:
                // do error stuff
                break;
        }

    you would hit none of these blocks (which is a good argument for always having a default in a switch, by the way). So what can you do? Well, there is a handy static method called IsDefined() on the Enum class which will tell you if an enum value is defined:

        public ResultCode PerformAction()
        {
            int result = ResultCodeFromDataSource();

            if (!Enum.IsDefined(typeof(ResultCode), result))
            {
                throw new InvalidOperationException("Enum out of range.");
            }

            return (ResultCode) result;
        }

    In fact, this is often recommended after you Parse() or cast a value to an enum, as there are ways for values that may not be defined to get past these methods. If you don't like the syntax of passing in the type of the enum, you could clean it up a bit by creating an extension method instead that would allow you to call IsDefined() off any instance of the enum:

        public static class EnumExtensions
        {
            // helper method that tells you if an enum value is defined for its enumeration
            public static bool IsDefined(this Enum value)
            {
                return Enum.IsDefined(value.GetType(), value);
            }
        }

    HasFlag() – an easier way to see if a bit (or bits) are set
    Most of us who came from the land of C programming have had to deal extensively with bit flags many times in our lives. As such, using bit flags may be almost second nature (for a quick refresher on bit flags in enum types see one of my old posts here). However, in higher-level languages like C#, the need to manipulate individual bit flags is somewhat diminished, and the code to check for bit flag enum values may be obvious to an advanced developer but cryptic to a novice developer. For example, let's say you have an enum for a messaging platform that contains bit flags:

        // usually, we pluralize flags enum type names
        [Flags]
        public enum MessagingOptions
        {
            None = 0,
            Buffered = 0x01,
            Persistent = 0x02,
            Durable = 0x04,
            Broadcast = 0x08
        }

    We can combine these bit flags using the bitwise OR operator (the '|' pipe character):

        // combine bit flags using |
        var myMessenger = new Messenger(MessagingOptions.Buffered | MessagingOptions.Broadcast);

    Now, if we wanted to check the flags, we'd have to test them using the bitwise AND operator (the '&' character):

        if ((options & MessagingOptions.Buffered) == MessagingOptions.Buffered)
        {
            // do code to set up buffering...
        }

    While the '|' for combining flags is easy enough to read for advanced developers, the '&' test tends to be easy for novice developers to get wrong. First of all you have to AND the flag combination with the value, and then typically you should test against the flag combination itself (and not just for a non-zero)! This is because the flag combination you are testing with may combine multiple bits, in which case if only one bit is set, the result will be non-zero but not necessarily all desired bits! Thank goodness in .NET 4.0 they gave us the HasFlag() method. This method can be called on an enum instance to test whether a flag is set, and best of all you avoid writing the bitwise logic yourself. Not to mention it will be more readable to a novice developer as well:

        if (options.HasFlag(MessagingOptions.Buffered))
        {
            // do code to set up buffering...
        }

    It is much more concise and unambiguous, thus increasing your maintainability and readability. It would be nice to have a corresponding SetFlag() method, but unfortunately generic types don't allow you to specialize on Enum, which makes it a bit more difficult. It can be done, but you have to do some conversions to numeric and then back to the enum, which makes it less of a payoff than having the HasFlag() method. But if you want to create it for symmetry, it would look something like this:

        public static T SetFlag<T>(this Enum value, T flags)
        {
            if (!value.GetType().IsEquivalentTo(typeof(T)))
            {
                throw new ArgumentException("Enum value and flags types don't match.");
            }

            // yes this is ugly, but unfortunately we need to use an intermediate boxing cast
            return (T)Enum.ToObject(typeof(T), Convert.ToUInt64(value) | Convert.ToUInt64(flags));
        }

    Note that since enum types are value types, we need to assign the result to something (much like string.Trim()). Also, you could chain several SetFlag() operations together or create one that takes a variable arg list if desired.

    Parse() and ToString() – transitioning from string to enum and back
    Sometimes you may want to parse an enum from a string or convert it to a string; Enum has methods built in to let you do this. Now, many may already know this, but may not appreciate how much power is in these two methods. For example, if you want to parse a string as an enum, it's easy and works just like you'd expect from the numeric types:

        string optionsString = "Persistent";

        // can use Enum.Parse, which throws if it finds something it doesn't like...
        var result = (MessagingOptions)Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result == MessagingOptions.Persistent)
        {
            Console.WriteLine("It worked!");
        }

    Note that Enum.Parse() will throw if it finds a value it doesn't like. But the values it likes are fairly flexible! You can pass in a single value, or a comma-separated list of values for flags, and it will parse them all and set all bits:

        // for string values, can have one, or comma separated.
        string optionsString = "Persistent, Buffered";

        var result = (MessagingOptions)Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
        {
            Console.WriteLine("It worked!");
        }

    Or you can pass in a string containing a number that represents a single value or combination of values to set:

        // 3 is the combination of Buffered (0x01) and Persistent (0x02)
        var optionsString = "3";

        var result = (MessagingOptions)Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
        {
            Console.WriteLine("It worked again!");
        }

    And if you really aren't sure the parse will work, and don't want to handle an exception, you can use TryParse() instead:

        string optionsString = "Persistent, Buffered";
        MessagingOptions result;

        // TryParse returns true if successful, and takes an out parm for the result
        if (Enum.TryParse(optionsString, out result))
        {
            if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
            {
                Console.WriteLine("It worked!");
            }
        }

    So we covered parsing a string to an enum; what about reversing that and converting an enum to a string? The ToString() method is the obvious and most basic choice for most of us, but did you know you can pass a format string for enum types that dictates how they are written as a string?

        MessagingOptions value = MessagingOptions.Buffered | MessagingOptions.Persistent;

        // general format, which is the default
        Console.WriteLine("Default    : " + value);
        Console.WriteLine("G (default): " + value.ToString("G"));

        // Flags format, even if type does not have Flags attribute.
        Console.WriteLine("F (flags)  : " + value.ToString("F"));

        // integer format, value as number.
        Console.WriteLine("D (num)    : " + value.ToString("D"));

        // hex format, value as hex
        Console.WriteLine("X (hex)    : " + value.ToString("X"));

    Which displays:

        Default    : Buffered, Persistent
        G (default): Buffered, Persistent
        F (flags)  : Buffered, Persistent
        D (num)    : 3
        X (hex)    : 00000003

    Now, you may not really see a difference here between G and F because I used a [Flags] enum; the difference is that the “F” option treats the enum as if it were flags even if the [Flags] attribute is not present. Let's take a non-flags enum like the ResultCode used earlier:

        // yes, we can do this even if it is not a [Flags] enum.
        ResultCode value = ResultCode.Warning | ResultCode.Error;

    And if we run that through the same formats again we get:

        Default    : 3
        G (default): 3
        F (flags)  : Warning, Error
        D (num)    : 3
        X (hex)    : 00000003

    Notice that since we had multiple values combined, but it was not a [Flags]-marked enum, the G and default formats gave us a number instead of a value name. This is because the value was not a valid single-value constant of the enum. However, the F flags format string broke the value out into its component flags even though it wasn't marked [Flags]. So, if you want an enum to display appropriately for whether or not it has the [Flags] attribute, use G, which is the default. If you always want it to attempt to break down the flags, use F. For numeric output, obviously D or X is the best choice, depending on whether you want decimal or hex.

    Summary
    Hopefully you learned a couple of new tricks for using the Enum class today! I'll add more little wonders as I think of them, and thanks for all the invaluable input! Technorati Tags: C#,.NET,Little Wonders,Enum,BlackRabbitCoder
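
    The post mentions that several SetFlag() calls can be chained; a quick usage sketch of the extension method above (the type and member names are the ones defined in the post):

        // start from no options and layer flags on one at a time;
        // each call returns the combined value, so the calls chain
        var options = MessagingOptions.None
            .SetFlag(MessagingOptions.Buffered)
            .SetFlag(MessagingOptions.Durable);

        Console.WriteLine(options.HasFlag(MessagingOptions.Buffered));  // True
        Console.WriteLine(options.HasFlag(MessagingOptions.Broadcast)); // False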

    Read the article

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights; combination models predicting likelihoods using multiple child models. The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple!

    Combining likelihoods
    Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice. Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely forgo the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon and tell us about, the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model.

    Facet Based Scores
    As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions.
    A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added to be returned to the client to facilitate testing.

    One Offer To Predict Them All
    In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for each product is also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level. We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices, based solely on the output of the facet based models defined earlier.

    The Switcheroo
    In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores, stored on the choice earlier as an Analytical Scores entity, to the session. The code for the Predict Combined Likelihood Event function is outlined below.

        // set session attribute to contain facet based scores.
        // (this is the only input for the combined model)
        session().setAnalyticalScores(choice.getAnalyticalScores());

        // predict likelihood of acceptance for All Offers choice.
        CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers");
        Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted");

        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);

        // return likelihood.
        return la;

    This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores, and is not distracted by the other session attributes, we will configure the model to exclude any other inputs, save for the instance of the Analytical Scores session attribute, on the model attributes tab.

    Recording Events
    In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function.
        // set session attribute to contain facet based scores
        // (this is the only input for the combined model).
        session().setAnalyticalScores(choice.getAnalyticalScores());

        // record input event against All Offers choice.
        CombinedLikelihood.getChoice("AllOffers").recordEvent(event);

        // force learn at this moment using the Internal Learn entry point.
        Application.getPredictor().learn(InternalLearn.modelArray, session(), session(), Application.currentTimeMillis());

        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);

    In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing by itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values.

    Reporting Results
    After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The facetted model for categories has clearly picked up on the fact that our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types, as evident from the report on the predictiveness of customer age for offer category acceptance.

    Conclusion
    Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD, and with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers' needs. We will not be able to elaborate on them all on this blog, and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above, or any Oracle Real-Time Decisions design challenges you might face, please do not hesitate to contact us; via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)

    Read the article

  • Talking JavaOne with Rock Star Simon Ritter

    - by Janice J. Heiss
    Oracle’s Java Technology Evangelist Simon Ritter is well known at JavaOne for his quirky and fun-loving sessions, which, this year, include: CON4644 -- “JavaFX Extreme GUI Makeover” (with Angela Caicedo, on how to improve UIs in JavaFX); CON5352 -- “Building JavaFX Interfaces for the Real World” (Kinect gesture tracking and mind reading); CON5348 -- “Do You Like Coffee with Your Dessert?” (some cool demos of Java on the Raspberry Pi); CON6375 -- “Custom JavaFX Charts” (how to extend JavaFX Chart controls with some interesting things). I recently asked Ritter about the significance of the Raspberry Pi, the topic of one of his sessions: a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools. “I don't think there's one definitive thing that makes the RP significant,” observed Ritter, “but a combination of things that really makes it stand out. First, it's the cost: $35 for what is effectively a completely usable computer. OK, so you have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM processor is also significant, as it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. Combine these two things with the immense groundswell of community support and it provides a fantastic platform for teaching young and old alike about computing, which is the real goal of the project.” He informed me that he’ll be at the Raspberry Pi meetup on Saturday (not part of JavaOne). Check out the details here.

    JavaFX Interfaces
    When I asked about how JavaFX can interface with the real world, he said that there are many ways. “JavaFX provides you with a simple set of programming interfaces that can create complex, cool and compelling user interfaces,” explained Ritter. “Because it's just Java code, you can combine JavaFX with any other Java library to provide data to display and control the interface. What I've done for my session is look at some of the possible ways of doing this using some of the amazing hardware that's available today at very low cost. The Kinect sensor has added a new dimension to gaming in terms of interaction; there's a Java API to access this, so you can easily collect skeleton tracking data from it. Some clever people have also written libraries that can track gestures like swipes, circles, pushes, and so on. We use these to control parts of the UI. I've also experimented with a Neurosky EEG sensor that can in some ways ‘read your mind’ (well, at least measure some of the brain functions like attention and meditation). I've written a Java library for this that I include as a way of controlling the UI. We're not quite at the stage of just thinking a command though!”

    Here Comes Java Embedded
    And what, from Ritter’s perspective, is the most exciting thing happening in the world of Java today? “I think it's seeing just how Java continues to become more and more pervasive,” he said. “One of the areas that is growing rapidly is embedded systems. We've talked about the ‘Internet of things’ for many years; now it's finally becoming a reality. With the ability of more and more devices to include processing, storage and networking, we need an easy way to write code for them that's reliable, has high performance, and is secure. Java fits all these requirements. With Java Embedded being a conference within a conference, I'm very excited about the possibilities of Java in this space.” Check out Ritter’s sessions or say hi if you run into him. Originally published on blogs.oracle.com/javaone.

    Read the article

  • Indexing data from multiple tables with Oracle Text

    - by Roger Ford
    It's well known that Oracle Text indexes perform best when all the data to be indexed is combined into a single index. The query

        select * from mytable where contains (title, 'dog') > 0 or contains (body, 'cat') > 0

    will tend to perform much worse than

        select * from mytable where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0

    For this reason, Oracle Text provides the MULTI_COLUMN_DATASTORE, which will combine data from multiple columns into a single index. Effectively, it constructs a "virtual document" at indexing time, which might look something like:

        <title>the big dog</title>
        <body>the ginger cat smiles</body>

    This virtual document can be indexed using either AUTO_SECTION_GROUP, or by explicitly defining sections for title and body, allowing the query as expressed above. Note that we've used a column called "text"; this might be a dummy column added to the table simply to allow us to create an index on it, or we could create the index on either of the "real" columns, title or body. It should be noted that MULTI_COLUMN_DATASTORE doesn't automatically handle updates to the columns it uses: if you create the index on the column text, but specify that columns title and body are to be indexed, you will need to arrange triggers such that the text column is updated whenever title or body is altered. That works fine for single tables. But what if we actually want to combine data from multiple tables? In that case there are two approaches which work well: 1. Create a real table which contains a summary of the information, and create the index on that using the MULTI_COLUMN_DATASTORE. This is simple and effective, but it does use a lot of disk space, as the information to be indexed has to be duplicated. 2. Create our own "virtual" documents using the USER_DATASTORE. The user datastore allows us to specify a PL/SQL procedure which will be used to fetch the data to be indexed, returned in a CLOB, or occasionally in a BLOB or VARCHAR2. This PL/SQL procedure is called once for each row in the table to be indexed, and is passed the ROWID value of the current row being indexed. The actual contents of the procedure are entirely up to the owner, but it is normal to fetch data from one or more columns of database tables. In both cases, we still need to take care of updates, making sure that we have all the triggers necessary to update the indexed column (and, in case 1, the summary table) whenever any of the data to be indexed gets changed. I've written full examples of both these techniques, as SQL scripts to be run in the SQL*Plus tool. You will need to run them as a user who has the CTXAPP role and the CREATE DIRECTORY privilege. Part of the data to be indexed is a Microsoft Word file called "1.doc". You should create this file in Word, preferably containing the single line of text "test document". This file can be saved anywhere, but the SQL scripts need to be changed so that the "create or replace directory" command refers to the right location. In the example, I've used C:\doc. multi_table_indexing_1.sql: creates a summary table containing all the data, and uses multi_column_datastore (Download link / View in browser). multi_table_indexing_2.sql: creates "virtual" documents using a procedure as a user_datastore (Download link / View in browser).
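
    For the single-table case the post opens with, a rough SQL*Plus sketch of wiring up the MULTI_COLUMN_DATASTORE (the preference and index names here are made up; the downloadable scripts above cover the multi-table techniques):

        exec ctx_ddl.create_preference('my_multi', 'MULTI_COLUMN_DATASTORE')
        exec ctx_ddl.set_attribute('my_multi', 'COLUMNS', 'title, body')

        create index myindex on mytable (text) indextype is ctxsys.context
          parameters ('datastore my_multi section group CTXSYS.AUTO_SECTION_GROUP');

        -- the combined query from above now works against the single index
        select * from mytable
        where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0;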

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that, for once, actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share, or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, connection-string, and exploration issues all over again. Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you - and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you and your users have multiple options for accessing that data. You’re then able, if you wish, to combine that data with other data in one location. So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) who know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered and you’ve assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924  You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
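
    For a flavor of what the C# call looks like, here is a rough, hypothetical sketch; the feed URL and account key are placeholders, and it assumes the feed follows the usual Marketplace pattern of OData over HTTPS with basic authentication (the MSDN link above is the authoritative reference):

        using System;
        using System.Net;

        class DataHubClient
        {
            static void Main()
            {
                // placeholder feed URL and key; real values come from your portal
                var feedUrl = "https://api.datamarket.azure.com/YourOrg/YourDataSet/";
                var client = new WebClient
                {
                    Credentials = new NetworkCredential("AccountKeyUser", "YOUR_ACCOUNT_KEY")
                };

                // the service speaks OData, so a plain HTTP GET returns the feed
                string feed = client.DownloadString(feedUrl + "Entries?$top=10");
                Console.WriteLine(feed);
            }
        }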

    Read the article

  • Create Custom Speech Bubbles in Silverlight.

    - by mbcrump
    I had a reader email me the following question: “How do you create Speech Bubbles in Silverlight/WPF without adding any extra .dlls?” Right off the bat, I know at least two ways to create speech bubbles that look just like the ones in comic books: using the Callout Shapes included with Blend 4, or using the free 3rd-party control named FreeBubbles (I used this before Blend 4). Unfortunately, we cannot use either of these, as they will both add extra .dlls to the project.

    So why wouldn’t you want to use one of those? I can think of a few reasons: You do not want to increase the size of your .XAP by including extra .dlls. You do not have Expression Blend or the license to use the .dlls. You want a custom speech bubble that is not included in the four “Callout” controls that ship with Blend.

    Instead of using one of these methods, we will create a speech bubble in Blend 4 using a Path element and a TextBlock. Before we get started, let’s look at the Callout Shapes included with Blend 4. Using Blend 4 you can simply drag/drop these controls onto your Silverlight application and you are ready to go. We can create all of these speech bubbles and even some of the modern bubbles used in recent comic books.

    Let’s get started. Start up Expression Blend 4 and select the Pen tool. On the artboard, start connecting the dots like I did below. You can add a color if you wish. …keep going …complete. Let’s go ahead and add some text to the speech bubble. Drag a TextBlock from the panel and put it directly inside the speech bubble. Go ahead and set the TextAlignment to Center for the TextBlock, and give it some text.

    At this point, you could go ahead and create a user control if you want to reuse the speech bubble you created. Select both the Path and the TextBlock by clicking them while holding down CTRL, and then right-click them. Select Make Into User Control, give it a name, and then build your project.

    Let’s create another one, using the Ellipse for the older comic-book style of speech bubbles. Drag an Ellipse to the artboard and give it a color. Now grab the Pen and draw a triangle like I did below. Simply drag it over a corner of the Ellipse. Select Combine, then Unite, and you will have a single Path. At this point you can go ahead and add a TextBlock like we did earlier.

    Let’s go ahead and create a rounded-rectangle one by adding a Rectangle to the artboard. Set the RadiusX and RadiusY to 25 to give it rounded edges. Create another triangle Path and drag it right on top of our rounded rectangle like we did earlier. …looking good. Select Combine, then Unite, and you will have a single Path. At this point you can add a TextBlock like we did earlier.

    So let’s look at what we’ve created today using the Path element and TextBlock. As you can tell, it required more work but meets the requirements. This was actually fun to do, and I encourage anyone who visits my blog to send in requests like this. Subscribe to my feed
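
    If you would rather build the same result in code, here is one hedged way to do it in WPF; Geometry.Parse is a WPF API (Silverlight needs XamlReader instead), and the path data below is an invented, approximate bubble-with-tail outline rather than the exact shape from the screenshots:

        // A rough WPF sketch of a speech bubble: a Path outline plus a centered TextBlock.
        // The path data is an invented, approximate bubble-with-tail shape.
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;
        using System.Windows.Shapes;

        public static class SpeechBubbleFactory
        {
            public static Grid Create(string text)
            {
                var bubble = new Path
                {
                    Fill = Brushes.LightYellow,
                    Stroke = Brushes.Black,
                    StrokeThickness = 2,
                    // Ellipse-ish body with a tail, like the Ellipse + triangle Combine/Unite step.
                    Data = Geometry.Parse("M 10,40 Q 10,10 60,10 Q 110,10 110,40 " +
                                          "Q 110,70 60,70 L 20,90 L 32,68 Q 10,60 10,40 Z")
                };

                var label = new TextBlock
                {
                    Text = text,
                    TextAlignment = TextAlignment.Center,
                    HorizontalAlignment = HorizontalAlignment.Center,
                    VerticalAlignment = VerticalAlignment.Center
                };

                // A Grid overlays the text on the bubble without any extra .dlls.
                var grid = new Grid();
                grid.Children.Add(bubble);
                grid.Children.Add(label);
                return grid;
            }
        }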

    Read the article

  • thesis in LaTeX without memoir

    - by yCalleecharan
    Hi, I'm preparing a thesis report in LaTeX. I don't want to use the memoir class, as I would like total control. I have two questions:

    1. My thesis consists of an introduction and then a series of articles pasted one after the other. Each article therefore has its own abstract, introduction, etc., and a bibliography at its end. I understand that a package like multibib can help when we have multiple bibliographies. The idea I came up with is to write each article separately and then use pdfpages to combine them into the final document. Concatenating DVI files was popular a long time ago, but with graphics I think pdfpages is better. I saw that another package exists, named combine, which can also help with what I am doing. I'm still exploring my options. Any suggestions?

    2. If I decide to use the book class instead, to give my thesis more of a book-like format, the section headings come out like 0.1 Introduction, with a leading 0. This 0 doesn't show up in the article class. How do I change the book class's default numbering so that I get e.g. 1 Introduction?

    Thanks a lot...
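
    For what it's worth, a minimal sketch of the pdfpages route follows; the file names are placeholders, and each article is assumed to be compiled to PDF separately with its own bibliography. The \renewcommand line is one hedged way to attack the second question: in the book class, section numbers carry the chapter counter as a prefix, so 0.1 simply means a section issued before any \chapter.

        % Sketch only: article1.pdf / article2.pdf stand in for your separately built papers.
        \documentclass{book}
        \usepackage{pdfpages}

        % Drop the chapter prefix so sections print as "1 Introduction" rather than "0.1":
        \renewcommand{\thesection}{\arabic{section}}

        \begin{document}
        \chapter{Introduction}
        % ... introductory material written directly in this document ...

        \includepdf[pages=-]{article1.pdf}  % paper I, with its own abstract and bibliography
        \includepdf[pages=-]{article2.pdf}  % paper II
        \end{document}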

    Read the article

  • Using Taylor Series to Avoid Loss of Precision

    - by Zachary
    I'm trying to use Taylor series to develop a numerically sound algorithm for solving a function. I've been at it for quite a while, but haven't had any luck yet. I'm not sure what I'm doing wrong. The function is f(x) = 1 + x - sin(x)/ln(1+x), for x ~ 0. Also: why does loss of precision even occur in this function? When x is close to zero, sin(x)/ln(1+x) isn't even close to being the same number as x. I don't see where significance is even being lost. In order to solve this, I believe that I will need to use the Taylor expansions for sin(x) and ln(1+x), which are x - x^3/3! + x^5/5! - x^7/7! + ... and x - x^2/2 + x^3/3 - x^4/4 + ... respectively. I have attempted to use common denominators to combine the x and sin(x)/ln(1+x) components, and even to combine all three, but nothing seems to work out correctly in the end. Any help is appreciated.
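
    For reference, dividing the two series term by term (a derivation worth re-checking by hand) gives

        \frac{\sin x}{\ln(1+x)} = 1 + \frac{x}{2} - \frac{x^2}{4} - \frac{x^3}{24} + O(x^4)

    so near zero both 1 + x and sin(x)/ln(1+x) are close to 1, and the subtraction cancels their leading digits. Substituting the expansion into f gives a form that can be evaluated stably:

        f(x) = 1 + x - \frac{\sin x}{\ln(1+x)} = \frac{x}{2} + \frac{x^2}{4} + \frac{x^3}{24} + O(x^4)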

    Read the article

  • GTK#-related error on MonoDevelop 2.8.5 on Ubuntu 11.04

    - by Mehrdad
    When I try to create a new solution in MonoDevelop 2.8.5 on Ubuntu 11.04 x64, it shows me:

        System.ArgumentNullException: Argument cannot be null.
        Parameter name: path1
          at System.IO.Path.Combine (System.String path1, System.String path2) [0x00000] in <filename unknown>:0
          at MonoDevelop.Core.FilePath.Combine (System.String[] paths) [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.ProjectCreateInformation.get_BinPath () [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.DotNetProject..ctor (System.String languageName, MonoDevelop.Projects.ProjectCreateInformation projectCreateInfo, System.Xml.XmlElement projectOptions) [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.DotNetAssemblyProject..ctor (System.String languageName, MonoDevelop.Projects.ProjectCreateInformation projectCreateInfo, System.Xml.XmlElement projectOptions) [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.DotNetProjectBinding.CreateProject (System.String languageName, MonoDevelop.Projects.ProjectCreateInformation info, System.Xml.XmlElement projectOptions) [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.DotNetProjectBinding.CreateProject (MonoDevelop.Projects.ProjectCreateInformation info, System.Xml.XmlElement projectOptions) [0x00000] in <filename unknown>:0
          at MonoDevelop.Projects.ProjectService.CreateProject (System.String type, MonoDevelop.Projects.ProjectCreateInformation info, System.Xml.XmlElement projectOptions) [0x00000] in <filename unknown>:0
          at MonoDevelop.Ide.Templates.ProjectDescriptor.CreateItem (MonoDevelop.Projects.ProjectCreateInformation projectCreateInformation, System.String defaultLanguage) [0x00000] in <filename unknown>:0
          at MonoDevelop.Ide.Templates.ProjectTemplate.HasItemFeatures (MonoDevelop.Projects.SolutionFolder parentFolder, MonoDevelop.Projects.ProjectCreateInformation cinfo) [0x00000] in <filename unknown>:0
          at MonoDevelop.Ide.Projects.NewProjectDialog.SelectedIndexChange (System.Object sender, System.EventArgs e) [0x00000] in <filename unknown>:0

    I strace'd it and saw repeated failed accesses to files like:

        /usr/lib/mono/gac/gtk-sharp/2.12.0.0__35e10195dab3c99f/libgtk-x11-2.0.so.0.la

    so I'm assuming that's the cause of the problem. However, I've installed (and re-installed) everything GTK#-related that I could think of... and the error still occurs. Does anyone know how to fix it?

    Read the article

  • How do you build a Windows Workflow Project with NAnt 0.90?

    - by LockeCJ
    I'm trying to build a Windows Workflow (WF) project using NAnt, but it doesn't seem to be able to build the ".xoml" and ".rules" files. Here is the <csc> task that I'm using:

        <csc debug="${build.Debug}" warninglevel="${build.WarningLevel}" target="library"
             output="${path::combine(build.OutputDir,assembly.Name+'.dll')}"
             verbose="${build.Verbose}"
             doc="${path::combine(build.OutputDir,assembly.Name+'.xml')}">
            <sources basedir="${assembly.BaseDir}">
                <include name="**/*.cs" />
                <include name="**/*.xoml" />
                <include name="**/*.rules" />
            </sources>
            <resources basedir="${assembly.BaseDir}">
                <include name="**/*.xsd" />
                <include name="**/*.resx" />
            </resources>
            <references>
                ...
            </references>
        </csc>

    Here's the output:

        Compiling 21 files to 'c:\Output\MyWorkFlowProject.dll'.
        [csc] c:\Projects\MyWorkFlowProject\AProcessFlow.xoml(1,1): error CS0116: A namespace does not directly contain members such as fields or methods
        [csc] c:\Projects\MyWorkFlowProject\BProcessFlow.xoml(1,1): error CS0116: A namespace does not directly contain members such as fields or methods
        [csc] c:\Projects\MyWorkFlowProject\CProcessFlow.rules(1,1): error CS0116: A namespace does not directly contain members such as fields or methods
        [csc] c:\Projects\MyWorkFlowProject\CProcessFlow.xoml(1,1): error CS0116: A namespace does not directly contain members such as fields or methods
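
    The errors are consistent with csc being handed files it cannot compile: .xoml and .rules files are XML workflow markup, not C#, and are normally translated by the workflow build tasks rather than passed to the compiler as sources. One hedged workaround is to let MSBuild build this one project from within NAnt; the msbuild.path value and project name below are placeholders for your setup:

        <!-- Sketch: delegate the WF project to MSBuild; adjust msbuild.path for your framework. -->
        <property name="msbuild.path"
                  value="C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe" />
        <exec program="${msbuild.path}">
            <arg value="MyWorkFlowProject.csproj" />
            <arg value="/p:Configuration=Release" />
        </exec>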

    Read the article

  • iText PDFReader Extremely Slow To Open

    - by Wbmstrmjb
    I have some code that combines a few pages of AcroForms (with the AcroFields intact) and then, at the end, writes some JS to the entire document. It is the PdfReader in the function adding the JS that is taking extremely long to instantiate (about 12 seconds for a 1MB file). Here is the code (pretty simple):

        public static byte[] AddJavascript(byte[] document, string js)
        {
            PdfReader reader = new PdfReader(new RandomAccessFileOrArray(document), null);
            MemoryStream msOutput = new MemoryStream();
            PdfStamper stamper = new PdfStamper(reader, msOutput);
            PdfWriter writer = stamper.Writer;
            writer.AddJavaScript(js);
            stamper.Close();
            reader.Close();
            byte[] withJS = msOutput.GetBuffer();
            return withJS;
        }

    I have benchmarked the above, and the line that is slow is the first one. I have tried reading it from a file instead of memory, and tried using a MemoryStream instead of the RandomAccessFileOrArray. Nothing makes it any faster. If I add JS to a single-page document, it is very fast. So my thought is that the code that combines the pages is somehow making the PDF slow for the PdfReader to read. Here is the combine code:

        public static byte[] CombineFiles(List<byte[]> sourceFiles)
        {
            MemoryStream output = new MemoryStream();
            PdfCopyFields copier = new PdfCopyFields(output);
            try
            {
                output.Position = 0;
                foreach (var fileBytes in sourceFiles)
                {
                    PdfReader fileReader = new PdfReader(fileBytes);
                    copier.AddDocument(fileReader);
                }
            }
            catch (Exception exception)
            {
                //throw
            }
            finally
            {
                copier.Close();
            }
            byte[] concat = output.GetBuffer();
            return concat;
        }

    I am using PdfCopyFields because I need to preserve the form fields and so cannot use PdfCopy or PdfSmartCopy. This combine code is very fast (a few ms) and produces working documents. The AddJavascript code above is called after it, and the PdfReader open is the slow piece. Any ideas?
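
    One detail that could explain it, offered as a guess: both methods return MemoryStream.GetBuffer(), which hands back the stream's whole internal buffer, including unused capacity, rather than just the bytes written. The combined output can therefore carry a tail of zero bytes after %%EOF, which a reader then has to scan past to find the trailer. MemoryStream.ToArray() copies exactly the written bytes:

        // GetBuffer() may include trailing garbage past the logical end of the stream;
        // ToArray() returns only the bytes actually written, keeping the PDF well-formed.
        byte[] concat = output.ToArray();   // instead of output.GetBuffer()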

    Read the article

  • Guaranteed way to find the ildasm.exe and ilasm.exe files regardless of .NET version/environment?

    - by m-y
    Is there a way to programmatically get the FileInfo/path of the ildasm.exe/ilasm.exe executables? I'm attempting to decompile and recompile a dll/exe file appropriately after making some alterations to it (I'm guessing PostSharp does something similar to alter the IL after compilation). I found a blog post that pointed to:

        var pfDir = Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles);
        var sdkDir = Path.Combine(pfDir, @"Microsoft SDKs\Windows\v6.0A\bin");
        ...

    However, when I ran this code the directory did not exist (mainly because my SDK version is 7.1), so on my local machine the correct path is @"Microsoft SDKs\Windows\v7.1\bin". How do I ensure I can actually find ildasm.exe? Similarly, I found another blog post on how to get access to ilasm.exe:

        string windows = Environment.GetFolderPath(Environment.SpecialFolder.System);
        string fwork = Path.Combine(windows, @"..\Microsoft.NET\Framework\v2.0.50727");
        ...

    While this works, I noticed that I have Framework and Framework64, and within Framework itself I have all of the versions up to v4.0.30319 (same with Framework64). So how do I know which one to use? Should it be based on the .NET Framework version I'm targeting?

    Summary: How do I reliably find the correct path to ildasm.exe? And how do I select the correct ilasm.exe to compile with?
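
    For ilasm.exe there is a dependable anchor: it ships in the directory of the runtime the current process is executing on, which RuntimeEnvironment.GetRuntimeDirectory() returns. ildasm.exe is an SDK tool with no single guaranteed location, so probing the installed SDK directories is the usual fallback. In the sketch below, the probing order and the newest-first policy are assumptions, not a guarantee:

        using System;
        using System.IO;
        using System.Linq;
        using System.Runtime.InteropServices;

        static class IlToolLocator
        {
            // ilasm.exe ships with the runtime itself, so this is deterministic
            // for whichever CLR the current process is running on.
            public static string FindIlasm()
            {
                return Path.Combine(RuntimeEnvironment.GetRuntimeDirectory(), "ilasm.exe");
            }

            // ildasm.exe is an SDK tool: probe the known SDK roots, newest first.
            public static string FindIldasm()
            {
                string sdkRoot = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles),
                    @"Microsoft SDKs\Windows");

                if (!Directory.Exists(sdkRoot))
                    return null;   // no SDK installed

                return Directory.GetDirectories(sdkRoot)   // v6.0A, v7.0A, v7.1, ...
                    .OrderByDescending(d => d, StringComparer.OrdinalIgnoreCase)
                    .Select(d => Path.Combine(d, @"bin\ildasm.exe"))
                    .FirstOrDefault(File.Exists);
            }
        }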

    Read the article

  • My sample app crashes while registering for the FILECHANGEINFO notification

    - by Solitaire
    public partial class Form1 : Form
    {
        [DllImport("coredll.dll")]
        static extern int SetWindowLong(IntPtr hWnd, int nIndex, IntPtr dwNewLong);

        [DllImport("coredll.dll")]
        static extern IntPtr CallWindowProc(IntPtr lpPrevWndFunc, IntPtr hWnd, int Msg, IntPtr wParam, IntPtr lParam);

        [DllImport("coredll.dll")]
        public static extern IntPtr GetWindowLong(IntPtr hWnd, int nIndex);

        // Earlier attempt via aygshell.dll, left commented out:
        //public struct tagSHCHANGENOTIFYENTRY
        //{
        //    [MarshalAs(UnmanagedType.SysUInt)]
        //    public ulong dwEventMask;
        //    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 4096)]
        //    public string WatchDir;
        //    [MarshalAs(UnmanagedType.Bool)]
        //    public bool fRecursive;
        //}
        //tagSHCHANGENOTIFYENTRY test;
        //[DllImport("aygshell.dll")]
        //static extern bool SHChangeNotifyRegister(IntPtr hwnd, ref tagSHCHANGENOTIFYENTRY test);

        const int GWL_WNDPROC = -4;

        public delegate int WindProc(IntPtr hWnd, int msg, IntPtr Wparam, IntPtr lparam);
        static private WindProc SampleProc;
        IntPtr OldDefProc = IntPtr.Zero;

        public enum SHCNE : uint
        {
            SHCNE_RENAMEITEM = 0x00000001,
            SHCNE_CREATE = 0x00000002,
            SHCNE_DELETE = 0x00000004,
            SHCNE_MKDIR = 0x00000008,
            SHCNE_RMDIR = 0x00000010,
            SHCNE_MEDIAINSERTED = 0x00000020,
            SHCNE_MEDIAREMOVED = 0x00000040,
            SHCNE_DRIVEREMOVED = 0x00000080,
            SHCNE_DRIVEADD = 0x00000100,
            SHCNE_NETSHARE = 0x00000200,
            SHCNE_NETUNSHARE = 0x00000400,
            SHCNE_ATTRIBUTES = 0x00000800,
            SHCNE_UPDATEDIR = 0x00001000,
            SHCNE_UPDATEITEM = 0x00002000,
            SHCNE_SERVERDISCONNECT = 0x00004000,
            SHCNE_UPDATEIMAGE = 0x00008000,
            SHCNE_DRIVEADDGUI = 0x00010000,
            SHCNE_RENAMEFOLDER = 0x00020000,
            SHCNE_FREESPACE = 0x00040000,
            SHCNE_EXTENDED_EVENT = 0x04000000,
            SHCNE_ASSOCCHANGED = 0x08000000,
            SHCNE_DISKEVENTS = 0x0002381F,
            SHCNE_GLOBALEVENTS = 0x0C0581E0,
            SHCNE_ALLEVENTS = 0x7FFFFFFF,
            SHCNE_INTERRUPT = 0x80000000,
        }

        public enum SHCNF
        {
            SHCNF_IDLIST = 0x0000,
            SHCNF_PATHA = 0x0001,
            SHCNF_PRINTERA = 0x0002,
            SHCNF_DWORD = 0x0003,
            SHCNF_PATHW = 0x0005,
            SHCNF_PRINTERW = 0x0006,
            SHCNF_TYPE = 0x00FF,
            SHCNF_FLUSH = 0x1000,
            SHCNF_FLUSHNOWAIT = 0x2000
        }

        public const uint WM_SHNOTIFY = 0x0401;
        private const int WM_FILECHANGEINFO = (0x8000 + 0x101);

        public struct SHChangeNotifyEntry
        {
            public IntPtr pIdl;
            [MarshalAs(UnmanagedType.Bool)]
            public Boolean Recursively;
        }

        [DllImport("coredll.dll", EntryPoint = "#2", CharSet = CharSet.Auto)]
        private static extern uint SHChangeNotifyRegister(IntPtr hWnd, SHCNF fSources, SHCNE fEvents, uint wMsg, int cEntries, ref SHChangeNotifyEntry pFsne);

        [DllImport("Ceshell.dll", CharSet = CharSet.Auto)]
        private static extern uint SHGetSpecialFolderLocation(IntPtr hWnd, CSIDL nFolder, out IntPtr Pidl);

        public enum CSIDL
        {
            CSIDL_DESKTOP = 0x0000,                   // Desktop
            CSIDL_INTERNET = 0x0001,                  // Internet Explorer (icon on desktop)
            CSIDL_PROGRAMS = 0x0002,                  // Start Menu\Programs
            CSIDL_CONTROLS = 0x0003,                  // My Computer\Control Panel
            CSIDL_PRINTERS = 0x0004,                  // My Computer\Printers
            CSIDL_PERSONAL = 0x0005,                  // My Documents
            CSIDL_FAVORITES = 0x0006,                 // <user name>\Favorites
            CSIDL_STARTUP = 0x0007,                   // Start Menu\Programs\Startup
            CSIDL_RECENT = 0x0008,                    // <user name>\Recent
            CSIDL_SENDTO = 0x0009,                    // <user name>\SendTo
            CSIDL_BITBUCKET = 0x000a,                 // desktop\Recycle Bin
            CSIDL_STARTMENU = 0x000b,                 // <user name>\Start Menu
            CSIDL_MYDOCUMENTS = 0x000c,               // logical "My Documents" desktop icon
            CSIDL_MYMUSIC = 0x000d,                   // "My Music" folder
            CSIDL_MYVIDEO = 0x000e,                   // "My Videos" folder
            CSIDL_DESKTOPDIRECTORY = 0x0010,          // <user name>\Desktop
            CSIDL_DRIVES = 0x0011,                    // My Computer
            CSIDL_NETWORK = 0x0012,                   // Network Neighborhood (My Network Places)
            CSIDL_NETHOOD = 0x0013,                   // <user name>\nethood
            CSIDL_FONTS = 0x0014,                     // windows\fonts
            CSIDL_TEMPLATES = 0x0015,
            CSIDL_COMMON_STARTMENU = 0x0016,          // All Users\Start Menu
            CSIDL_COMMON_PROGRAMS = 0x0017,           // All Users\Start Menu\Programs
            CSIDL_COMMON_STARTUP = 0x0018,            // All Users\Startup
            CSIDL_COMMON_DESKTOPDIRECTORY = 0x0019,   // All Users\Desktop
            CSIDL_APPDATA = 0x001a,                   // <user name>\Application Data
            CSIDL_PRINTHOOD = 0x001b,                 // <user name>\PrintHood
            CSIDL_LOCAL_APPDATA = 0x001c,             // <user name>\Local Settings\Application Data (non-roaming)
            CSIDL_ALTSTARTUP = 0x001d,                // non-localized startup
            CSIDL_COMMON_ALTSTARTUP = 0x001e,         // non-localized common startup
            CSIDL_COMMON_FAVORITES = 0x001f,
            CSIDL_INTERNET_CACHE = 0x0020,
            CSIDL_COOKIES = 0x0021,
            CSIDL_HISTORY = 0x0022,
            CSIDL_COMMON_APPDATA = 0x0023,            // All Users\Application Data
            CSIDL_WINDOWS = 0x0024,                   // GetWindowsDirectory()
            CSIDL_SYSTEM = 0x0025,                    // GetSystemDirectory()
            CSIDL_PROGRAM_FILES = 0x0026,             // C:\Program Files
            CSIDL_MYPICTURES = 0x0027,                // C:\Program Files\My Pictures
            CSIDL_PROFILE = 0x0028,                   // USERPROFILE
            CSIDL_SYSTEMX86 = 0x0029,                 // x86 system directory on RISC
            CSIDL_PROGRAM_FILESX86 = 0x002a,          // x86 C:\Program Files on RISC
            CSIDL_PROGRAM_FILES_COMMON = 0x002b,      // C:\Program Files\Common
            CSIDL_PROGRAM_FILES_COMMONX86 = 0x002c,   // x86 Program Files\Common on RISC
            CSIDL_COMMON_TEMPLATES = 0x002d,          // All Users\Templates
            CSIDL_COMMON_DOCUMENTS = 0x002e,          // All Users\Documents
            CSIDL_COMMON_ADMINTOOLS = 0x002f,         // All Users\Start Menu\Programs\Administrative Tools
            CSIDL_ADMINTOOLS = 0x0030,                // <user name>\Start Menu\Programs\Administrative Tools
            CSIDL_CONNECTIONS = 0x0031,               // Network and Dial-up Connections
            CSIDL_COMMON_MUSIC = 0x0035,              // All Users\My Music
            CSIDL_COMMON_PICTURES = 0x0036,           // All Users\My Pictures
            CSIDL_COMMON_VIDEO = 0x0037,              // All Users\My Video
            CSIDL_RESOURCES = 0x0038,                 // Resource Directory
            CSIDL_RESOURCES_LOCALIZED = 0x0039,       // Localized Resource Directory
            CSIDL_COMMON_OEM_LINKS = 0x003a,          // Links to All Users OEM-specific apps
            CSIDL_CDBURN_AREA = 0x003b,               // USERPROFILE\Local Settings\Application Data\Microsoft\CD Burning
            CSIDL_COMPUTERSNEARME = 0x003d,           // Computers Near Me (computed from Workgroup membership)
            CSIDL_FLAG_CREATE = 0x8000,               // combine with CSIDL_ value to force folder creation in SHGetFolderPath()
            CSIDL_FLAG_DONT_VERIFY = 0x4000,          // combine with CSIDL_ value to return an unverified folder path
            CSIDL_FLAG_NO_ALIAS = 0x1000,             // combine with CSIDL_ value to ensure non-alias versions of the pidl
            CSIDL_FLAG_PER_USER_INIT = 0x0800,        // combine with CSIDL_ value to indicate per-user init (e.g. upgrade)
            CSIDL_FLAG_MASK = 0xFF00,                 // mask for all possible flags
        }

        public enum SHGetFolderLocationReturnValues : uint
        {
            S_OK = 0x00000000,           // Success
            S_FALSE = 0x00000001,        // The CSIDL in nFolder is valid, but the folder does not exist
            E_INVALIDARG = 0x80070057    // The CSIDL in nFolder is not valid
        }

        public static IntPtr GetPidlFromFolderID(IntPtr hWnd, CSIDL Id)
        {
            IntPtr pIdl = IntPtr.Zero;
            SHGetFolderLocationReturnValues res =
                (SHGetFolderLocationReturnValues)SHGetSpecialFolderLocation(hWnd, Id, out pIdl);
            return (pIdl);
        }

        public Form1()
        {
            InitializeComponent();

            // Subclass the form's window so WM_FILECHANGEINFO messages can be observed.
            SampleProc = new WindProc(SubclassWndProc);
            OldDefProc = GetWindowLong(this.Handle, GWL_WNDPROC);
            SetWindowLong(this.Handle, GWL_WNDPROC,
                Marshal.GetFunctionPointerForDelegate(SampleProc) /*SampleProc.Method.MethodHandle.Value.ToInt32()*/);

            //tagSHCHANGENOTIFYENTRY changeentry = new tagSHCHANGENOTIFYENTRY();
            //changeentry.dwEventMask = (ulong)SHCNE.SHCNE_ALLEVENTS;
            //changeentry.fRecursive = true;
            //changeentry.WatchDir = null;
            //SHChangeNotifyRegister(this.Handle, ref changeentry);

            SHChangeNotifyEntry changeentry = new SHChangeNotifyEntry();
            changeentry.pIdl = GetPidlFromFolderID(this.Handle, CSIDL.CSIDL_DESKTOP);
            changeentry.Recursively = true;
            try
            {
                uint notifyid = SHChangeNotifyRegister(
                    this.Handle,
                    SHCNF.SHCNF_TYPE | SHCNF.SHCNF_IDLIST,
                    SHCNE.SHCNE_ALLEVENTS,
                    WM_FILECHANGEINFO,
                    1,
                    ref changeentry);   // <-- the call that crashes
            }
            catch (Exception ee)
            {
            }
        }
        // (SubclassWndProc and the rest of the class are omitted in the post.)
    }

    It is the SHChangeNotifyRegister call that fails. Can anyone tell me why it crashes? The same code works fine on the desktop. Please help. Thanks.

    Read the article

  • R + Bioconductor : combining probesets in an ExpressionSet

    - by Mike Dewar
    Hi, First off, this may be the wrong forum for this question, as it's pretty darn R+Bioconductor specific. Here's what I have:

        library('GEOquery')
        GDS = getGEO('GDS785')
        cd4T = GDS2eSet(GDS)
        cd4T <- cd4T[!fData(cd4T)$symbol == "",]

    Now cd4T is an ExpressionSet object which wraps a big matrix with 19794 rows (probesets) and 15 columns (samples). The final line gets rid of all probesets that do not have corresponding gene symbols. Now the trouble is that most genes in this set are assigned to more than one probeset. You can see this by doing:

        gene_symbols = factor(fData(cd4T)$Gene.symbol)
        length(gene_symbols) - length(levels(gene_symbols))
        [1] 6897

    So only 6897 of my 19794 probesets have unique probeset-to-gene mappings. I'd like to somehow combine the expression levels of each probeset associated with each gene. I don't care much about the actual probe ID for each probe. I'd very much like to end up with an ExpressionSet containing the merged information, as all of my downstream analysis is designed to work with this class. I think I can write some code that will do this by hand and make a new ExpressionSet from scratch. However, I'm assuming this can't be a new problem, and that code exists to do it using a statistically sound method to combine the gene expression levels. I'm guessing there's a proper name for this too, but my Google searches aren't turning up much of use. Can anyone help?

    Read the article

  • Azure : The process cannot access the file "" because it is being used by another process.

    - by Shantanu
    Hi all, I am trying to get a MATLAB-compiled exe running on Azure, and for that purpose need to get a v78.zip onto the local storage of the cloud instance and unzip it before I can try to run the exe. The program works fine when executed locally, but on deployment gives an error at the line marked below in the code. The error is:

        The process cannot access the file 'C:\Resources\directory\cc0a20f5c1314f299ade4973ff1f4cad.WebRole.LocalStorage1\v78.zip' because it is being used by another process.

        Exception Details: System.IO.IOException: The process cannot access the file 'C:\Resources\directory\cc0a20f5c1314f299ade4973ff1f4cad.WebRole.LocalStorage1\v78.zip' because it is being used by another process.

    The code is given below:

        string localPath = RoleEnvironment.GetLocalResource("LocalStorage1").RootPath;
        Response.Write(localPath + " \n");
        Directory.SetCurrentDirectory(localPath);
        CloudBlob mblob = GetProgramContainer().GetBlobReference("v78.zip");
        CloudBlockBlob mbblob = mblob.ToBlockBlob;
        CloudBlob zipblob = GetProgramContainer().GetBlobReference("7z.exe");
        string zipPath = Path.Combine(localPath, "7z.exe");
        string matlabPath = Path.Combine(localPath, "v78.zip");
        IEnumerable<ListBlockItem> blocklist = mbblob.DownloadBlockList();
        BlobStream stream = mbblob.OpenRead();
        FileStream fs = File.Create(matlabPath);   // <-- exception occurs here

    It'd be a great help if someone could tell me where I'm going wrong. Thanks! Shan
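
    One guess worth testing before anything else: none of the streams in this method are disposed, so a previous request on the same role instance (or an earlier attempt within this one) may still be holding v78.zip open when File.Create runs. Below is a sketch of the same download with both streams wrapped in using blocks, reusing the variable names from the question:

        // Dispose both streams deterministically so no earlier request keeps v78.zip locked.
        using (BlobStream stream = mbblob.OpenRead())
        using (FileStream fs = File.Create(matlabPath))
        {
            var buffer = new byte[32 * 1024];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                fs.Write(buffer, 0, read);
            }
        }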

    Read the article

< Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >