Search Results

Search found 19135 results on 766 pages for 'fn key'.

Page 183/766 | < Previous Page | 179 180 181 182 183 184 185 186 187 188 189 190  | Next Page >

  • Update on People Tools

    Jeff Robbins, Senior Director of PeopleTools Strategy for Oracle, updates listeners on the key features of the most recent PeopleTools release and the key enhancements coming in the next release.

    Read the article

  • Putting an ASP.NET application into OFFLINE mode

    - by Jason Ulloa
    One of the options every application should have is the ability to go OFFLINE and block user access. This is essential when we want to make changes to the application (modify something, deploy an update, etc.) or to the database, and want to avoid problems with the users who are logged in at that moment. Many examples around the Web show how to accomplish this using two techniques:

    1. The first is the App_Offline.htm file. This technique has a drawback, however: once the file has been uploaded to the application, the site is blocked completely and there is no way to bring it back ONLINE short of deleting the file. In other words, we cannot control it.

    2. The second is the httpRuntime element, but again we have the same problem: once OFFLINE mode is enabled through this element, we also lose access to any administration page that could switch it back. An example of the httpRuntime element:

        <configuration>
          <system.web>
            <httpRuntime enable="false" />
          </system.web>
        </configuration>

    Given the above, the best option is to put the site into OFFLINE mode from an administration page, while keeping access to that page so that we can later change the value that puts the application back ONLINE. For this we will use the application's web.config and a small class in charge of reading and writing the values.

    First, open web.config and define two new keys inside appSettings that will hold the values for the application's OFFLINE mode:

        <appSettings>
          <add key="IsOffline" value="false" />
          <add key="IsOfflineMessage" value="System temporarily unavailable for maintenance tasks." />
        </appSettings>

    In the keys above, IsOffline has a value of false, which tells the application that it is currently running ONLINE; this is the value we will later change to true to switch to OFFLINE mode. The second key (IsOfflineMessage) holds the message ("System temporarily unavailable…") shown to the user while the site is OFFLINE. With both keys defined in web.config, we will write a custom class to read and write the values, so add a new class item to the project named SettingsRules and declare it public.
    This class will contain two methods. The first reads the values:

        public string readIsOnlineSettings(string sectionToRead)
        {
            Configuration cfg = WebConfigurationManager.OpenWebConfiguration(System.Web.Hosting.HostingEnvironment.ApplicationVirtualPath);
            KeyValueConfigurationElement isOnlineSettings = (KeyValueConfigurationElement)cfg.AppSettings.Settings[sectionToRead];
            return isOnlineSettings.Value;
        }

    The second method is in charge of writing the new values back to web.config:

        public bool saveIsOnlineSettings(string sectionToWrite, string value)
        {
            bool succesFullySaved;
            try
            {
                Configuration cfg = WebConfigurationManager.OpenWebConfiguration(System.Web.Hosting.HostingEnvironment.ApplicationVirtualPath);
                KeyValueConfigurationElement repositorySettings = (KeyValueConfigurationElement)cfg.AppSettings.Settings[sectionToWrite];
                if (repositorySettings != null)
                {
                    repositorySettings.Value = value;
                    cfg.Save(ConfigurationSaveMode.Modified);
                }
                succesFullySaved = true;
            }
            catch (Exception)
            {
                succesFullySaved = false;
            }
            return succesFullySaved;
        }

    Finally, we define a region in the class named instance, containing a static property that returns an instance of the class (a simple singleton, so we do not have to instantiate it later):

        #region instance

        private static SettingsRules m_instance;

        // Properties
        public static SettingsRules Instance
        {
            get
            {
                if (m_instance == null)
                {
                    m_instance = new SettingsRules();
                }
                return m_instance;
            }
        }

        #endregion instance

    With this, our main class is complete, so we can move on to the pages and the rest of the code that completes the functionality.

    To complement the web.config work we will use the fabulous GLOBAL.ASAX. It will hold the code in charge of detecting whether the application is ONLINE or OFFLINE, and of blocking every page and directory except the one defined as the administration section, so that the site can be reconfigured later. The Global.asax event we will use is Application_BeginRequest:

        protected void Application_BeginRequest(Object sender, EventArgs e)
        {
            if (Convert.ToBoolean(SettingsRules.Instance.readIsOnlineSettings("IsOffline")))
            {
                string Virtual = Request.Path.Substring(0, Request.Path.LastIndexOf("/") + 1);
                if (Virtual.ToLower().IndexOf("/admin/") == -1)
                {
                    // Not the admin section: block the request by transferring to the offline page.
                    Server.Transfer("~/TemporarilyOfflineMessage.aspx");
                }
            }
        }

    The first line of the IF checks whether the web.config attribute is true or false. If it is true, we take the requested URL and check whether it belongs to the admin section (this section is nothing more than a folder in the application named admin, and it can be changed to any other). If the result of that check is -1, the path does not match, and that is the flag that lets us immediately block the current page, transferring the user to a maintenance page.

    Now, in the Admin folder we create a new ASP.NET page named OnlineSettings.aspx to read and update the web.config values, and a Default.aspx page for testing. Our OnlineSettings page has two important steps:

    1. Read the current configuration values:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                IsOffline.Checked = Convert.ToBoolean(mySettings.readIsOnlineSettings("IsOffline"));
                OfflineMessage.Text = mySettings.readIsOnlineSettings("IsOfflineMessage");
            }
        }

    2. Update the settings with the new values.
        protected void UpdateButton_Click(object sender, EventArgs e)
        {
            string htmlMessage = OfflineMessage.Text.Replace(Environment.NewLine, "<br />");

            // Update the Application variables
            Application.Lock();
            if (IsOffline.Checked)
            {
                mySettings.saveIsOnlineSettings("IsOffline", "True");
                mySettings.saveIsOnlineSettings("IsOfflineMessage", htmlMessage);
            }
            else
            {
                mySettings.saveIsOnlineSettings("IsOffline", "false");
                mySettings.saveIsOnlineSettings("IsOfflineMessage", htmlMessage);
            }
            Application.UnLock();
        }

    Finally, in the application root we create a new aspx page named TemporarilyOfflineMessage.aspx, which is the page shown when the application is blocked. In the end the application looks something like this (screenshots: the blocked page, the blocking configuration page, and the finished sample application).

    Read the article

  • Manage and Monitor Identity Ranges in SQL Server Transactional Replication

    - by Yaniv Etrogi
    Problem

    When using transactional replication to replicate data in a one-way topology from a publisher to read-only subscriber(s), there is no need to manage identity ranges. However, when using transactional replication in a two-way replication topology - between two or more servers - identity ranges must be managed in order to prevent a situation where an INSERT command fails on a PRIMARY KEY violation because the replicated row being inserted carries a value for the identity column that already exists at the destination database.

    Solution

    There are two ways to address this situation: assign a range of identity values to each server, or work with parallel identity values. The first method requires some maintenance while the second method does not, so the scripts provided with this article are very useful for anyone using the first method. I will explore this in more detail later in the article.

    In the first solution, set server1 to work in the range of 1 to 1,000,000,000 and server2 to work in the range of 1,000,000,001 to 2,000,000,000. The ranges are set and defined using the DBCC CHECKIDENT command, and when the ranges in this example are well maintained you meet the goal of preventing INSERT commands from failing on a PRIMARY KEY violation. The first insert at server1 will get the identity value 1, the second insert the value 2 and so on, while on server2 the first insert will get the identity value 1000000001, the second 1000000002 and so on, thus avoiding a conflict. Be aware that when a row is inserted, the identity value (seed) is generated as part of the insert command at each server and the inserted row is replicated. The replicated row includes the identity column's value, so the data remains consistent across all servers, but you will be able to tell on which server the original insert took place from the range that the identity value belongs to.

    In the second solution you do not manage ranges but enforce a situation in which identity values can never overlap, by setting the first identity value (seed) and the increment property once only, in the CREATE TABLE command of each table. So a table on server1 looks like this: CREATE TABLE T1 ( c1 int NOT NULL IDENTITY(1, 5) PRIMARY KEY CLUSTERED ,c2 int NOT NULL ); And a table on server2 looks like this: CREATE TABLE T1( c1 int NOT NULL IDENTITY(2, 5) PRIMARY KEY CLUSTERED ,c2 int NOT NULL ); When rows are inserted into these two tables, the identity values look like this: Server1: 1, 6, 11, 16, 21, 26… Server2: 2, 7, 12, 17, 22, 27… This ensures no identity value conflicts while leaving room for 3 additional servers to participate in this same environment. You can go up to 9 servers using this method by setting an increment value of 9 instead of the 5 I used in this example. Continues…
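    For the first method, the range assignment itself is done with DBCC CHECKIDENT. A minimal T-SQL sketch, with the table name and boundary values assumed here for illustration only:

        -- On server1: check the current identity value, then anchor the range low.
        DBCC CHECKIDENT ('dbo.T1', NORESEED);            -- report only, no change
        DBCC CHECKIDENT ('dbo.T1', RESEED, 0);           -- next insert typically gets 0 + increment = 1

        -- On server2: anchor the range at the top half.
        DBCC CHECKIDENT ('dbo.T1', RESEED, 1000000000);  -- next insert typically gets 1000000001

    The maintenance the article mentions then amounts to re-running RESEED (or moving to a fresh range) before a server exhausts the range it was assigned.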

    Read the article

  • Oracle WebCenter in Action: Best Practices from Oracle Consulting

    - by Kellsey Ruppel
    See concrete, real-world examples of deployments throughout the Oracle WebCenter stack. Oracle Consulting will lead you through a discussion of best practices and key customer use cases, and offer practical tips to support web experience management, enterprise content management, and portal deployments.

    Watch this webcast as our presenters discuss:
    - Best practices for deployments of large, complex architectures with Oracle WebCenter Sites
    - Key deployments and helpful hints for Oracle WebCenter Content
    - Performance tuning takeaways when using Oracle WebCenter Portal

    Watch the webcast by registering now.

    Read the article

  • ASP.NET Horizontal Menu Control

    A few weeks ago, I was working on an ASP.NET web application and needed a simple horizontal menu with a submenu. I decided to use ASP.NET's Menu control: just drag and drop the control onto the page. Simple enough, but the control did not provide access key or target window support on menu items. I have put together a tutorial on how to add an access key attribute, a target attribute, and a site map path.

    Read the article

  • Backlight Issue

    - by Shubham
    When I booted up my newly installed Ubuntu 11.04, I discovered, to my dismay, that the backlight was off by default. The keyboard shortcut (which is Fn+F6 on my Acer Aspire 4736) for turning on the backlight doesn't seem to work. I have been trying to resolve this problem for the past 2 days now, but with no success. By the way, the backlight did work properly once or twice at random - but the problem again popped up as soon as I restarted my system.

    Read the article

  • Announcing: General Availability of Demantra 7.3.1.4!

    - by user702295
    This new release brings important usability upgrades and key customer-requested enhancements. Key features released in Demantra 7.3.1.4:
    - Improved user interface
    - Improved mobile support
    - Embed Demantra-Anywhere in Advanced Planning Command Center
    - Aggregate work orders for Asset Intensive Planning
    Additionally, Demantra 7.3.1.4 is certified with VCP 12.1.3.8 only. Available via patch 14405087.

    Read the article

  • SQL Server 2008 R2: StreamInsight changes at RTM: Access to grouping keys via explicit typing

    - by Greg Low
    One of the problems that existed in the CTP3 edition of StreamInsight was an error that occurred if you tried to access the grouping key from within your projection expression. That was a real issue as you always need access to the key. It's a bit like using a GROUP BY in TSQL and then not including the columns you're grouping by in the SELECT clause. You'd see the results but not be able to know which results are which. Look at the following code: var laneSpeeds = from e in vehicleSpeeds group e...(read more)

    Read the article

  • RemoveAll Dictionary Extension Method

    - by João Angelo
    Removing from a dictionary all the elements whose keys satisfy a set of conditions is something I needed to do more than once, so I implemented it as an extension method on the IDictionary<TKey, TValue> interface. Here's the code:

        public static class DictionaryExtensions
        {
            /// <summary>
            /// Removes all the elements where the key matches the conditions defined by the specified predicate.
            /// </summary>
            /// <typeparam name="TKey">The type of the dictionary key.</typeparam>
            /// <typeparam name="TValue">The type of the dictionary value.</typeparam>
            /// <param name="dictionary">A dictionary from which to remove the matched keys.</param>
            /// <param name="match">The <see cref="Predicate{T}"/> delegate that defines the conditions of the keys to remove.</param>
            /// <exception cref="ArgumentNullException">dictionary is null -or- match is null.</exception>
            /// <returns>The number of elements removed from the <see cref="IDictionary{TKey, TValue}"/>.</returns>
            public static int RemoveAll<TKey, TValue>(
                this IDictionary<TKey, TValue> dictionary,
                Predicate<TKey> match)
            {
                if (dictionary == null) throw new ArgumentNullException("dictionary");
                if (match == null) throw new ArgumentNullException("match");

                // Materialize the matching keys first; removing while enumerating would throw.
                var keysToRemove = dictionary.Keys.Where(k => match(k)).ToList();

                if (keysToRemove.Count == 0) return 0;

                foreach (var key in keysToRemove)
                {
                    dictionary.Remove(key);
                }

                return keysToRemove.Count;
            }
        }
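    A quick usage sketch (the names here are hypothetical, not from the original post), removing every entry whose key carries a given prefix:

        var cache = new Dictionary<string, int>
        {
            { "tmp_a", 1 }, { "tmp_b", 2 }, { "keep", 3 }
        };

        // Removes the two "tmp_" entries and returns 2.
        int removed = cache.RemoveAll(key => key.StartsWith("tmp_"));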

    Read the article

  • When clientTransferProhibited is off to transfer a domain name, couldn't the name be stolen?

    - by Cedric Martin
    I'd like to transfer a domain from one registrar (Key-systems) to another (OVH). I don't really understand the procedure and I'm a bit confused... I read everywhere that clientTransferProhibited prevents people from stealing your domain name. Now, apparently, in the course of transferring my domain from Key-systems to OVH I'll have to turn clientTransferProhibited off so that the transfer is allowed. Wouldn't my domain then become "stealable" for some amount of time (a few hours / days / weeks)?

    Read the article

  • What advantages do we have when creating a separate mapping table for two relational tables

    - by Pankaj Upadhyay
    In various open source CMSes, I have noticed that there is a separate table for mapping two related tables. For categories and products, for example, there is a separate product_category_mapping table. This table just has a primary key and two foreign keys, one from the categories table and one from the products table. My question is: what are the benefits of this database design over just linking the tables directly by defining a foreign key in either table? Is it just a matter of convenience?
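    For reference, here is a minimal T-SQL sketch of the mapping-table shape described above, with the table and column names assumed for illustration. The point of the design is that it models a many-to-many relationship, which a single foreign key column in either table cannot: a product may belong to several categories, and a category may hold several products.

        CREATE TABLE product_category_mapping (
            product_id  INT NOT NULL REFERENCES products (id),
            category_id INT NOT NULL REFERENCES categories (id),
            PRIMARY KEY (product_id, category_id)  -- composite key: one row per product/category pair
        );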

    Read the article

  • Naming conventions for language file keys

    - by VirtuosiMedia
    What is your strategy for naming the keys in language files used for localization? We have a team that is about to convert a project to multiple languages and would like some guidelines to follow. As an example, the files usually end up being a series of key/value pairs, with the key being the placeholder in the template for the language-specific value.

    'Username': 'Username',
    'Enter Username': 'Enter your username here'

    Read the article

  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background

    This note is a supplement to the blog entry, SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs, by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay Adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single program ID on the SAP system. We will try to address the same requirements within the OSB framework here.

    Project Build - Design Time

    The basic build of an OSB project with the iWay SAP Adapter, as seen in another entry in this blog, consists of working in the OSB Design console and Application Explorer.

    OSB Design Time - Part 1

    We first create a placeholder project in OSB with a proper directory structure, so that we can export the WSDL, XSD and the JCA binding information from Application Explorer directly into this project.

    Application Explorer - iWay Design Time Tool

    Receiving IDOCs is classified as an inbound event within Application Explorer. For setting up events, a channel is first defined (e.g. iDoc_Channel) using the same PROGRAMID (RFC destination) as defined within SAP for the OSB server. Next, the same channel is used to export the JCA Inbound Event artifacts for the candidate IDOC, e.g. DEBMAS06, directly to the pre-created OSB project. Note that schema validation has been turned off. As a result, the adapter can use a single channel at runtime to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat the above step for each IDOC type.

    OSB Design Time - Part 2

    Create 2 simple XML-based Business Services to write to a file, e.g. SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a Proxy Service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function:

        fn:local-name-from-QName(fn:node-name($body/*[1]))

    This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as its top element. With the routing function in place, build the routing table with 2 branches to route the IDOCs to the appropriate Business Service for writing the XML payload to files in separate directories. This completes the build of the OSB project.

    Testing - Run-Time

    After deployment and activation, the SAP adapter waits to receive multiple types of IDOCs sent from the SAP system using a single channel. Upon receipt, the OSB project routes the IDOCs appropriately, saving the corresponding XML payloads for different IDOC types in different directories.
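    To see what the routing function yields at runtime, here is a hedged illustration (payload content elided, names as used above). The adapter delivers the translated XML with the IDOC type as the root element of the message body, so:

        (: $body/*[1] selects the root element of the payload, e.g. <DEBMAS06>...</DEBMAS06> :)
        fn:node-name($body/*[1])                              (: yields the QName DEBMAS06 :)
        fn:local-name-from-QName(fn:node-name($body/*[1]))    (: yields the string "DEBMAS06" :)

    Each branch of the routing table then matches one such string (e.g. "DEBMAS06" to SAP_DEBMAS_File) to dispatch each IDOC type to its Business Service.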

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #031

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.

    2007

    Find Table without Clustered Index – Find Table with no Primary Key
    A clustered index is a very important concept for any table; it impacts performance very heavily. Here is a quick script to find tables without a clustered index.

    Replace TEXT with VARCHAR(MAX) – Stop using TEXT, NTEXT, IMAGE Data Types
    Question: "Is VARCHAR(MAX) big enough to store the TEXT field?" Answer: "Yes, VARCHAR(MAX) is big enough to accommodate a TEXT field. The TEXT, NTEXT and IMAGE data types of SQL Server 2000 will be deprecated in a future version of SQL Server. SQL Server 2005 provides backward compatibility for these data types, but it is recommended to use the new data types, which are VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX)."

    Limiting Result Sets by Using TABLESAMPLE – Examples
    Introduced in SQL Server 2005, TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random and not in any order. The sampling can be based on a percentage or on a number of rows. You can use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set.

    User Defined Functions (UDF) Limitations
    UDFs have their own advantages and usage, but in this article we see the limitations of UDFs: things UDFs cannot do, and why Stored Procedures are considered more flexible than UDFs. This blog post is a good read to learn what the limitations of UDFs are.

    Change Database Compatible Level – Backward Compatibility
    For a long time SQL Server stayed on compatibility level 80, that of SQL Server 2000. However, as soon as SQL Server 2005 was introduced, compatibility became quite a major issue. Since that time MS has been releasing new versions every 2-3 years, so changing compatibility is an ever-popular topic. In this blog post, we learn how to do it using T-SQL. We can also do the same using SSMS, and here is the blog post for that: Change Database Compatible Level – Backward Compatibility – Part 2 – Management Studio.

    Constraint on VARCHAR(MAX) Field To Limit It Certain Length
    How can I limit a VARCHAR(MAX) field to a maximum length of 12,500 characters only? The question was valid, as the application in question allowed 12,500 characters. First of all, this requirement is a bit strange, but if someone wants to do it, they can, as described in this blog post.

    2008

    UNPIVOT Table Example
    Understanding UNPIVOT can be very complicated at times. In this blog post, I have attempted to explain the concept in very simple words.

    Create Default Constraint Over Table Column
    A simple, straight-to-script blog post – I still use it quite often for my own reference.

    UDF – Get the Day of the Week Function
    It took me 4 iterations to arrive at this very simple function, which can get the day of the week in a single line.

    2009

    Find Hostname and Current Logged In User Name
    There are two tricks listed in this blog post with which users can find the hostname and the currently logged-in user name immediately and very easily.
    Interesting Observation of Logon Trigger On All Servers
    While working on a project, I made an interesting observation of a logon trigger executing multiple times. It was absolutely unexpected for me! As I was logging in only once, I naturally expected a single entry. However, it appeared multiple times on different threads – indeed an eccentric phenomenon at first sight!

    Difference Between Candidate Keys and Primary Key
    One needs to be very careful in selecting the Primary Key, as an incorrect selection can adversely impact the database architecture and future normalization. For a Candidate Key to qualify as a Primary Key, it should be non-NULL and unique in any domain. I have observed quite often that Primary Keys are seldom changed. I would like to have your feedback on not changing a Primary Key.

    Create Multiple Filegroup For Single Database
    Why should one create multiple filegroups for a database, and what are the advantages of doing so? In this blog post, I explain the same in detail.

    List All Objects Created on All Filegroups in Database
    In this blog post we discuss the essential question: "How can I find which object belongs to which filegroup? Is there any way to know this?"

    2010

    DATE and TIME in SQL Server 2008
    When DATE is converted to DATETIME it adds the time of midnight. When TIME is converted to DATETIME it adds the date of 1900, which is something to consider if you are going to run scripts using CONVERT from SQL Server 2008 against an earlier version.

    Disabled Index and Update Statistics
    If you do not need a nonclustered index, I suggest you drop it, as keeping it disabled is an overhead on your system. This is because every time statistics are updated for the system, the statistics for disabled indexes are also updated.

    Precision of SMALLDATETIME – A 1 Minute Precision
    The precision of the SMALLDATETIME datatype is 1 minute. It discards the seconds by rounding up or rounding down any seconds greater than zero.

    2011

    Getting Columns Headers without Result Data – SET FMTONLY ON
    SET FMTONLY ON returns only metadata to the client. It can be used to test the format of the response without actually running the query. When this setting is ON, the resultset has only the headers of the results but no data.

    Copy Database from Instance to Another Instance – Copy Paste in SQL Server
    SQL Server has a feature which copies a database from one instance to another, and it can be automated as well using SSIS. Make sure you have SQL Server Agent turned on, as this feature will create a job.

    Puzzle – SELECT * vs SELECT COUNT(*)
    If you have ever wondered why SELECT * gives an error when executed alone but SELECT COUNT(*) does not, you should read this blog post.

    Creating All New Database with Full Recovery Model
    This blog post is based on a very interesting story where the user wants something to happen by default for every single new database created. The model database is a secret weapon which should be used very carefully and with proper evaluation. Used carefully, it can be very beneficial when we need every newly created database to behave in a certain fashion.
    2012

    In 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a three-part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz.

    Guess the Next Value – Puzzle 1
    Guess the Next Value – Puzzle 2
    Guess the Next Value – Puzzle 3

    Can anyone remember their final day of schooling? This is probably a silly question because – of course you can! Many people mark this as the most exciting, happiest day of their life. It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Read the five-part series on the subject of developer training:

    Developer Training – Importance and Significance – Part 1
    Developer Training – Employee Morals and Ethics – Part 2
    Developer Training – Difficult Questions and Alternative Perspective – Part 3
    Developer Training – Various Options for Developer Training – Part 4
    Developer Training – A Conclusive Summary – Part 5

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data

    Have you been hearing about "big data", "map reduce" and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced.

    Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell, might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

        C:\> winrm quickconfig
        WinRM is not set up to receive requests on this machine.
        The following changes must be made:
        Set the WinRM service type to auto start.
        Start the WinRM service.
        Make these changes [y/n]? y

    Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

    Invoke-MapRedux

    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with the concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map

    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows.
    A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

        $aMap =
        {
            Param
            (
                [PsObject] $dataset
            )
            # Indicate the job is running on the remote node.
            Write-Host ($env:computername + "::Map");
            # The hashtable to return
            $list = @{};
            # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
            # ... The $dataset has a single 'Data' property which contains an array of data rows
            #     which is a subset of the originally submitted data set.
            # Return the hashtable (Key, PSObject)
            Write-Output $list;
        }

    Reduce

    Likewise, the Reduce function follows a simple prototype which takes a $key and a result $dataset from the MapRedux partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

        $aReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )
            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
            # The hashtable to return
            $redux = @{};
            # Return
            Write-Output $redux;
        }

    All Together Now

    When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started:

        # Import the MapRedux and SQL Server providers
        Import-Module "MapRedux"
        Import-Module "sqlps" -DisableNameChecking

        # Query the database for a dataset
        Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
        $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
        Write-Host "Query: $query"
        $dataset = Invoke-SqlCmd -query $query

        # Build the Map function
        $MyMap =
        {
            Param
            (
                [PsObject] $dataset
            )
            Write-Host ($env:computername + "::Map");
            $list = @{};
            foreach($row in $dataset.Data)
            {
                # Write-Host ("Key: " + $row.MyKey.ToString());
                if($list.ContainsKey($row.MyKey) -eq $true)
                {
                    $s = $list.Item($row.MyKey);
                    $s.Sum += $row.Value1;
                    $s.Count++;
                }
                else
                {
                    $s = New-Object PSObject;
                    $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                    $s | Add-Member -Type NoteProperty -Name Sum -Value $row.Value1;
                    # Count must exist as a property before $s.Count++ above can increment it.
                    $s | Add-Member -Type NoteProperty -Name Count -Value 1;
                    $list.Add($row.MyKey, $s);
                }
            }
            Write-Output $list;
        }

        $MyReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )
            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
            $redux = @{};
            $count = 0;
            foreach($s in $dataset.Data)
            {
                $sum += $s.Sum;
                $count += 1;
            }
            # Reduce
            $redux.Add($s.MyKey, $sum / $count);
            # Return
            Write-Output $redux;
        }

        # Create the item data
        $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

        # Array of processing nodes...
        $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

        # Run the Map Reduce routine...
        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

        # Show the results
        Set-Location C:\
        $MyMrResults | Out-GridView

    Conclusion

    I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
    Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading! Daniel

    Read the article

  • How to enable hard-blocked bluetooth in Thinkpad Edge 320

    - by Non
    I'm trying to use the built-in Bluetooth device of my Lenovo ThinkPad E320. It seems to be hard blocked, but I can't find any way to unblock it.

    rfkill list returns:
    0: tpacpi_bluetooth_sw: Bluetooth
       Soft blocked: yes
       Hard blocked: yes

    cat /proc/acpi/ibm/bluetooth returns:
    status: disabled
    commands: enable, disable

    I tried to enable it by:
    - Pressing Fn+F9 (radio control)
    - echo enable | tee /proc/acpi/ibm/bluetooth
    - rfkill unblock bluetooth
    - Through the BIOS, but it's not mentioned there at all

    None of these actions changed the outputs above.

    Read the article

  • Cannot change brightness on an Acer Aspire 5745-dg

    - by pete-maister
    I have an Acer Aspire 5745DG laptop and a very serious problem: the brightness is always at maximum and I cannot change it with the Fn keys or from Power Management, so my battery discharges fast. In my Windows partition brightness works perfectly; it is only in Ubuntu that I have this serious and annoying problem. Please reply; I'd appreciate it!

    Read the article

  • Why is it necessary to install EFI/rEFInd/UEFI/... on a SD Card since the Macbook Pro seems to already have it?

    - by user170794
    Dear askubuntu members, I own a MacBook Pro (late 2009). When I boot the laptop and hold the Alt key, an EFI screen appears, so EFI is installed on... the firmware? I had a few troubles with my hard disk, so I had to change it, but I haven't installed OS X; I have only installed Ubuntu, and the EFI screen is still there, which is surely a good thing. As the new hard disk is making trouble again, I am using Puppy Linux, booting from a CD each time, which is uncomfortable. So I am trying to get Ubuntu installed on an SD card.

    After having spent many months grabbing information anywhere I can on the internet and trying several things, I applied this method: http://www.weihermueller.de/mac/ I succeeded in making one SD card recognizable by the EFI of my laptop (holding the Alt key at boot), but nothing is installed on it yet, as I fear losing the recognizable-by-EFI part. I have not succeeded in producing the same result on another SD card.

    I have a bootable USB key of Ubuntu (yippee) which works like a live CD, made with the help of Universal Linux UDF Creator, found here: http://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/ on which I have put Ubuntu 13.04 64-bit, retrieved from the official repositories. Even though I have to add the "nouveau.noaccel=1" option to the grub command line launching Linux, it works (yippee again) properly as a live CD.

    When installing Ubuntu I come to the "where do I want to put Ubuntu" window, and I partition another SD card into: the EFI part (40MB) and the Linux part (between 15GB and 16GB). The installation works fine and finishes with no problem. But at reboot, the SD card where Linux is installed is not recognized by the EFI; the icons are the CD (Puppy Linux), the USB stick (from Linux UDF Creator), and the hard drive (the formerly-working Ubuntu 12), but no fourth icon for the SD card whatsoever.

    As the title of this thread suggests, I am wondering:
    - why is there a need for EFI to be installed on the SD card, since EFI seems to be on my laptop anyway?
    - why does EFI have to be on a different partition than Linux's? How do both parts communicate?
    - why isn't the EFI part on the SD card made with the help of the live USB key recognized?
    - on the EFI partition, there is a folder named "EFI" which contains another folder named "ubuntu" which contains a file named "grubx64.efi"; why is there a thing called grub? Is it the Linux grub where one can choose either to boot, to boot in safe mode, etc.?

    Thank you for your patience; looking forward to any kind of answer, Julien

    Read the article

  • How to enable bluetooth when there is no hardware switch

    - by Robert Mutke
    My laptop, a ThinkPad Edge E320, came out of standby with Bluetooth and wireless disabled. Wi-Fi got enabled normally in Unity, but Bluetooth says "bluetooth is disabled by hardware switch". There is no hardware switch on my laptop. I tried:

    # echo 1 > /sys/devices/platform/thinkpad_acpi/bluetooth_enable
    bash: echo: write error: Operation not permitted

    but, as you can see, with no result. Fn-F9, which is the radio control, does not work. Any help?

    Read the article

  • Idera SQL Doctor 3.0 and MS SQL Changes

    New features worth mentioning in SQL doctor 3.0 begin with a new server dashboard that not only gives a comprehensive overview of a SQL Server instance's current health, but also surfaces several key details to help database administrators. These include recommendations on how to optimize server configuration, how to fix certain security issues, and how to get rid of performance bottlenecks. The latest version of SQL doctor also supplies users with key server information: the status of system parameters known to affect SQL Server performance, such as processes, disk partitions, cache, m...

    Read the article

  • Brightness doesn't change on Sony laptop

    - by Eng.amr19
    I installed Ubuntu 12.04 on my laptop for the first time and everything works well; the only thing that doesn't work is the brightness. It doesn't change when I press Fn+F5/F6, so I can't adjust it, and sometimes I need to decrease it to get more working time on battery. The notification bubble appears as if it changed, but nothing happens. My laptop is a Sony VAIO F series with NVIDIA GeForce ... and the NVIDIA accelerated graphics driver is activated.

    Read the article

  • Wireless is disabled by hardware switch on Dell Inspiron 1750

    - by lowerkey
    I have a problem where Ubuntu (12.04) says there is a wireless network card, but it is disabled by a hardware switch. How do I turn it on? I checked the BIOS, and the wireless card is enabled there. Fn+F2 was also no success. The results of rfkill list:

    0: brcmwl-0: Wireless LAN
       Soft blocked: no
       Hard blocked: yes
    1: dell-wifi: Wireless LAN
       Soft blocked: yes
       Hard blocked: yes

    sudo rfkill unblock wifi did nothing to the wireless status.

    Read the article

  • How do I get keyboard to write hiragana instead of katakana?

    - by Lisandro
    I added the Japanese layout in Keyboard preferences; however, all the layouts appear to be in katakana. I suspect that the key to the left of the number 1 (above Tab and below Esc) could be the one that switches from katakana to hiragana, but I don't have that key on my keyboard, and my other Toshiba laptop does not have it either. I really don't know what to do; I simply want to be able to write in hiragana in Ubuntu. Thanks

    Read the article

  • 2D Animation Smoothness - Delta time vs. Kinematics

    - by viperld002
    I'm animating a sprite in 2D with key frames of rotation and xy-positions. I've recently had a discussion with someone saying that when the device (an iPad using cocos2d, as it happens) hits a performance bump due to whatever else the user may be doing, lag will arise, and that the best way to fight it is to not use actual positions, but velocities, accelerations and torques with kinematics, evaluating the positions and rotations from these speeds at the current point in time.

    I've never come across kinematics being used to stem lag in 2D animations and am not sure how effective it could be. It also seems to be overkill. The application is not networked, so it's all running on a local device. The desired effect is that the animation always plays as closely as it can to the target frame rate. Wouldn't the technique suffer the same problems as just using the time since the last frame, or a fixed time step, since the kinematics would also require some time value to perform the calculation? What techniques could you suggest to best achieve the desired effect?

    EDIT 1

    Thank you for your responses, they are very illuminating. I want to clarify my question before choosing an answer, however, to make sure that this post really serves its purpose.

    I have a sprite of a ball, and a text file with 3 arrays of information (rotation, translations x, translations y), with each unit of information being a key frame to be stepped through (0 to 49 and back to 0 to replay it again). I have this playing by interpolating from the current key frame to the next, every n units of time. The animation is visibly correct when compared to a video I was given of it, and it is smooth because of the interpolation between the key frames. This is the existing state of the project: there are no physics simulated, only a static animation of a ball moving in a way an artist specifically designed.

    Should I, instead of rotation in degrees and translations by positions in space, derive velocities, accelerations and torques to express this static animation as a function of time? As in, position now = foo(time now), where foo uses kinematics.
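    For what it's worth, here is a minimal C# sketch of the time-based key-frame sampling described above (the 50-frame layout is taken from the question; the 25 fps authoring rate and all names are assumptions). Sampling by elapsed time means a lag spike makes the animation skip ahead rather than play slower:

        struct KeyFrame { public float Rotation, X, Y; }

        const int FrameCount = 50;         // 50 authored key frames, looping back to 0
        const float FrameTime = 1f / 25f;  // assumed authoring rate of 25 fps

        KeyFrame Sample(KeyFrame[] frames, float elapsedSeconds)
        {
            float pos = (elapsedSeconds / FrameTime) % FrameCount;  // position in frame units
            int i = (int)pos;
            int next = (i + 1) % FrameCount;  // wrap around to replay
            float t = pos - i;                // 0..1 blend factor between the two key frames

            return new KeyFrame
            {
                Rotation = frames[i].Rotation + t * (frames[next].Rotation - frames[i].Rotation),
                X        = frames[i].X        + t * (frames[next].X        - frames[i].X),
                Y        = frames[i].Y        + t * (frames[next].Y        - frames[i].Y),
            };
        }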

    Read the article

  • I can't modify the screen brightness on Xubuntu 13.10

    - by linuxer
    I got this problem with brightness on my fresh Xubuntu 13.10 install: the screen brightness doesn't increase or decrease when pressing Fn+F5/F6. I've tried some solutions, such as setting these values in the /etc/default/grub file and running sudo update-grub (this worked for me on my previous Xubuntu 13.04 install):

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux acpi_backlight=vendor"
    GRUB_CMDLINE_LINUX="acpi_osi=Linux acpi_backlight=vendor"

    But it's still not working. I have a Sony VAIO VPCEJ3B1E running Xubuntu 13.10 64-bit with an NVIDIA GeForce 410M, using the proprietary 319 drivers. Thanks in advance!

    Read the article
