Search Results

Search found 9599 results on 384 pages for 'delete overload'.


  • Minidlna Directory Issues

    - by Somnambulist
    I've done my searching and can't find an answer to THIS specific issue. I have my minidlna set up and running - but it's not really done properly. First off, when I open the server on my Blu-ray player, all of my movies are listed twice - when they are certainly not saved on my external twice. Second, when I open the server - rather than reading "Movies", "TV", "Music", etc. - it just mashes all of my movies, TV, and some other folders all together with no real organization. I never had this problem when I had my Windows setup, so I know it's something configured improperly rather than my external drive giving me grief. Here's my minidlna.conf file:

        # This is the configuration file for the MiniDLNA daemon, a DLNA/UPnP-AV media
        # server.
        #
        # Unless otherwise noted, the commented out options show their default value.
        #
        # On Debian, you can also refer to the minidlna.conf(5) man page for
        # documentation about this file.
        media_dir=/media/somnambulist/Ghost In You
        # This option can be specified more than once if you want multiple directories
        # scanned.
        #
        # If you want to restrict a media_dir to a specific content type, you can
        # prepend the directory name with a letter representing the type (A, P or V),
        # followed by a comma, as so:
        # * "A" for audio    (eg. media_dir=A,/var/lib/minidlna/music)
        # * "P" for pictures (eg. media_dir=P,/var/lib/minidlna/pictures)
        # * "V" for video    (eg. media_dir=V,/var/lib/minidlna/videos)
        #
        # WARNING: After changing this option, you need to rebuild the database. Either
        #          run minidlna with the '-R' option, or delete the 'files.db' file
        #          from the db_dir directory (see below).
        #          On Debian, you can run, as root, 'service minidlna force-reload' instead.
        #media_dir=/var/lib/minidlna
        media_dir=V,/media/somnambulist/Ghost In You/Movies
        media_dir=V,/media/somnambulist/Ghost In You/TV
        media_dir=P,/home/somnambulist/Pictures
        # Path to the directory that should hold the database and album art cache.
        db_dir=/home/somnambulist/serverart
        # Path to the directory that should hold the log file.
        log_dir=/home/somnambulist/serverlog
        # Minimum level of importance of messages to be logged.
        # Must be one of "off", "fatal", "error", "warn", "info" or "debug".
        # "off" turns off logging entirely, "fatal" is the highest level of importance
        # and "debug" the lowest.
        #log_level=warn
        # Use a different container as the root of the directory tree presented to
        # clients. The possible values are:
        # * "." - standard container
        # * "B" - "Browse Directory"
        # * "M" - "Music"
        # * "P" - "Pictures"
        # * "V" - "Video"
        # If you specify "B" and the client device is audio-only then "Music/Folders" will be used as root.
        root_container=B
        # Network interface(s) to bind to (e.g. eth0), comma delimited.
        #network_interface=
        # IPv4 address to listen on (e.g. 192.0.2.1).
        #listening_ip=
        # Port number for HTTP traffic (descriptions, SOAP, media transfer).
        port=8200
        # URL presented to clients.
        # The default is the IP address of the server on port 80.
        #presentation_url=http://example.com:80
        # Name that the DLNA server presents to clients.
        friendly_name=Somnambulist Media Server
        # Serial number the server reports to clients.
        serial=12345678
        # Model name the server reports to clients.
        #model_name=Windows Media Connect compatible (MiniDLNA)
        # Model number the server reports to clients.
        model_number=1
        # Automatic discovery of new files in the media_dir directory.
        #inotify=yes
        # List of file names to look for when searching for album art. Names should be
        # delimited with a forward slash ("/").
        album_art_names=Cover.jpg/cover.jpg/AlbumArtSmall.jpg/albumartsmall.jpg/AlbumArt.jpg/albumart.jpg/Album.jpg/album.jpg/Folder.jpg/folder.jpg/Thumb.jpg/thumb.jpg
        # Strictly adhere to DLNA standards.
        # This allows server-side downscaling of very large JPEG images, which may
        # decrease JPEG serving performance on (at least) Sony DLNA products.
        #strict_dlna=no
        # Support for streaming .jpg and .mp3 files to a TiVo supporting HMO.
        #enable_tivo=no
        # Notify interval, in seconds.
        #notify_interval=895
        # Path to the MiniSSDPd socket, for MiniSSDPd support.
        #minissdpdsocket=/run/minissdpd.sock

    And here's the error I get in terminal when I run:

        sudo service minidlna restart
        sudo service minidlna force-reload

    Force restart error:

        Restarting DLNA/UPnP-AV media server minidlna
        [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied]
        [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied]

    Force-reload error:

        Restarting DLNA/UPnP-AV media server minidlna
        [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied]
        [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied]
        rm: cannot remove ‘/home/somnambulist/serverart/files.db’: Permission denied
        rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Slumdog Millionaire/Slumdog.Millionaire.Cover.jpg’: Permission denied
        rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Zack and Miri Make a Porno/ZackAndMiriMakeAPornoCover.jpg’: Permission denied
        [2013/08/12 21:19:46] minidlna.c:744: warn: Failed to clean old file cache. [ OK ]

    I've spent hours on this at this point, read through various files - and even had a friend who is relatively Ubuntu-savvy try to help me via chat - no such luck. Thanks in advance for any help.

    Read the article

  • Oracle Database 12c: Oracle Multitenant Option

    - by hamsun
    1. Why ? 2. What is it ? 3. How ?

    1. Why ?

    The main idea of the 'grid' is to share resources, to make better use of storage, CPU and memory. If a database administrator wishes to implement this idea, he or she must consolidate many databases into one database. One of the concerns of running many applications together in one database is: 'What will happen if one of the applications must be restored because of a human error?' Tablespace point-in-time recovery can be used for this purpose, but there are a few prerequisites; most importantly, the tablespaces must be strictly separated for each application. Another reason for creating separate databases is security: each customer has his own database. Therefore, there is often a proliferation of smaller databases. Each of them must be maintained and upgraded, and each allocates virtual memory and runs background processes, thereby wasting resources. Oracle 12c offers another possibility for virtualization, providing isolation at the database level: the multitenant container database holding pluggable databases.

    2. What ?

    Pluggable databases are logical units inside a multitenant container database, which consists of one multitenant container database and up to 252 pluggable databases. The SGA is shared, as are the background processes. The multitenant container database holds the metadata information common to all pluggable databases inside the System and Sysaux tablespaces, and there is just one Undo tablespace. The pluggable databases have smaller System and Sysaux tablespaces, containing just their 'personal' metadata. New data dictionary views make the information available either at the pdb level (dba_views) or at the container level (cdb_views). There are local users, which are known in specific pluggable databases, and common users, which are known in all containers. Pluggable databases can easily be plugged into another multitenant container database or converted from a non-CDB, and they can undergo point-in-time recovery.

    3. How ?

    Creating a multitenant container database can be done using the Database Configuration Assistant: there you find the new option Create as Container Database. If you prefer 'hand made' databases you can execute the command from an instance in nomount state:

        CREATE DATABASE cdb1 ENABLE PLUGGABLE DATABASE ...;

    And of course this can also be achieved through Enterprise Manager Cloud Control. A freshly created multitenant container database consists of two containers: the root container as the 'rack' and a seed container, a template for future pluggable databases. There are 4 ways to create other pluggable databases:

    1. Create an empty pdb from seed
    2. Plug in a non-CDB
    3. Move a pdb from another CDB
    4. Copy a pdb from another CDB

    We will discuss option 2: how to plug a non-CDB into a multitenant container database. Three different methods are available:

    1. Create an empty pdb and use Data Pump in traditional export/import mode or in Transportable Tablespace or Database mode. This method is suitable for pre-12c databases.
    2. Create an empty pdb and use GoldenGate replication. When the pdb catches up with the non-CDB, you fail over to the pdb.
    3. Databases of version 12c or higher can be plugged in with the help of the new dbms_pdb package.

    This is a demonstration of method 3:

    Step 1: Connect to the non-CDB to be plugged in and create an XML file with a description of the database. The XML file is written to $ORACLE_HOME/dbs by default and contains mainly information about the datafiles.

    Step 2: Check whether the non-CDB is pluggable into the multitenant container database.

    Step 3: Create the pluggable database, connected to the multitenant container database. With the nocopy option the files will be reused, but the tempfile is created anew. A service is created and registered automatically with the listener.

    Step 4: Delete unnecessary metadata from the PDB SYSTEM tablespace. To connect to the newly created pdb, edit tnsnames.ora and add an entry for the new pdb. Connect to the plugged-in non-CDB and clean up the data dictionary to remove entries now maintained in the multitenant container database. As all kept objects have to be recompiled, this will take a few minutes.

    Step 5: The plugged-in database will be automatically synchronised, by creating common users and roles, when it is opened the first time in read-write mode.

    Step 6: Verify tablespaces and users: there is only one local tablespace (users) and one local user (scott) in the plugged-in non-CDB pdb_orcl.

    This method of plugging in a non-CDB is fast and easy for 12c databases. The method for unplugging a pluggable database from a CDB is to create a new non-CDB, use the new full transportable feature of Data Pump, and drop the pluggable database.

    About the Author: Gerlinde has been working for Oracle University Germany as one of our Principal Instructors for over 14 years. She started with Oracle 7 and became an Oracle Certified Master for Oracle 10g and 11g. She is a specialist in Database Core Technologies, with profound knowledge in Backup & Recovery, Performance Tuning for DBAs and Application Developers, Data Warehouse Administration, Data Guard and Real Application Clusters.
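    Since the screenshots for Steps 1 to 6 are not reproduced here, the following is a rough, hedged SQL*Plus sketch of the dbms_pdb plug-in flow described above. The file path and the pdb_orcl name are illustrative assumptions, not values taken from the original screenshots; always follow the Oracle documentation for your exact release.

        -- Step 1: on the non-CDB (opened read only), describe it to an XML file
        -- (the path below is an illustrative assumption)
        EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/u01/app/oracle/dbs/noncdb.xml');

        -- Step 2: on the CDB root, check plug-in compatibility
        SET SERVEROUTPUT ON
        DECLARE
          compatible BOOLEAN;
        BEGIN
          compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                          pdb_descr_file => '/u01/app/oracle/dbs/noncdb.xml',
                          pdb_name       => 'PDB_ORCL');
          DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'YES' ELSE 'NO' END);
        END;
        /

        -- Step 3: plug it in, reusing the existing datafiles (nocopy)
        CREATE PLUGGABLE DATABASE pdb_orcl
          USING '/u01/app/oracle/dbs/noncdb.xml'
          NOCOPY TEMPFILE REUSE;

        -- Step 4: connect to the new pdb and clean up the data dictionary
        ALTER SESSION SET CONTAINER = pdb_orcl;
        @?/rdbms/admin/noncdb_to_pdb.sql

        -- Step 5: open the pdb read write for the first time
        ALTER PLUGGABLE DATABASE OPEN;

    Treat this as a sketch under the stated assumptions; the screenshots in the original post and the Multitenant documentation remain the authoritative reference.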

    Read the article

  • Record and Play your WebLogic Console Tasks Like a DVR

    - by james.bayer
    Automation using WebLogic Scripting Tool

    Today on the Oracle internal mailing list for WebLogic Server questions someone asked how to automate the configuration of the thread model for WebLogic Server, and they were having trouble with the jython scripting syntax. I've previously written about this feature, called Work Managers, and the associated constraints. However, I did not show how to automate the process of configuring this without the console, using WebLogic Scripting Tool – the jython scripting automation environment abbreviated as WLST. I've written some very basic introductions to WLST before, and there is also an Oracle By Example on the subject, but this is a bit more advanced. Fear not, because there is a really easy-to-use feature of the WLS console that lets you "Record" user actions just like a DVR. Using these recordings of the web-based console, you can easily create a script even if you are unfamiliar with the WLST syntax and API. I'm a big fan of both DVRs and automation, as can be evidenced with this old Halloween picture taken during simpler times. Obviously the Cast Away and The Big Lebowski references show some age. I was a big Tivo fan-boy back in the day and I still think it's the best DVR.

    I strongly believe that WebLogic Scripting Tool (WLST) is an absolutely essential tool for automating administration tasks in anything beyond a development environment. Even in development environments you can make a case that it makes sense to start the automation for environments downstream. I promise you that once you start using it for any tasks that you do even semi-regularly, you won't go back to clicking through the console. It's simply so much more efficient and less error-prone to run a script.

    Let's say you need to create a Work Manager and MaxThreadsConstraint – the easy way to do it is to configure it in the WLS console first while capturing the commands with a recording. See the images for the simple steps – click to enlarge.

    Record Console Configurations to a File

    Review the Recordings and Make Slight Modifications

    In order to make the recorded .py file directly callable as a stand-alone script I added calls to the connect() and edit() functions at the beginning and calls to disconnect() and exit() at the end – otherwise the main section of the script was provided by the console recording. Below is the resulting file I saved as d:/temp/wm.py

        connect('weblogic','welcome1', 't3://localhost:7001')
        edit()
        startEdit()

        cd('/SelfTuning/wl_server')
        cmo.createMaxThreadsConstraint('MaxThreadsConstraint-0')

        cd('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0')
        set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))
        cmo.setCount(5)
        cmo.unSet('ConnectionPoolName')

        cd('/SelfTuning/wl_server')
        cmo.createWorkManager('WorkManager-0')
        cd('/SelfTuning/wl_server/WorkManagers/WorkManager-0')
        set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))

        cmo.setMaxThreadsConstraint(getMBean('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0'))
        cmo.setIgnoreStuckThreads(false)

        activate()
        disconnect()
        exit()

    Run the Script

    If you want to test it, be sure to delete the Work Manager and MaxThreadsConstraint that you had previously created in the console. Do something like the following - set up the environment and tell WLST to execute the script, which happens in the first 2 lines; the rest doesn't require any user input:

        D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server\bin>setDomainEnv.cmd
        D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server>java weblogic.WLST d:\temp\wm.py

        Initializing WebLogic Scripting Tool (WLST) ...

        Welcome to WebLogic Server Administration Scripting Shell

        Type help() for help on available commands

        Connecting to t3://localhost:7001 with userid weblogic ...
        Successfully connected to Admin Server 'examplesServer' that belongs to domain 'wl_server'.

        Warning: An insecure protocol was used to connect to the server.
        To ensure on-the-wire security, the SSL port or Admin port should be used instead.

        Location changed to edit tree. This is a writable tree with
        DomainMBean as the root. To make changes you will need to start
        an edit session via startEdit().

        For more help, use help(edit)

        Starting an edit session ...
        Started edit session, please be sure to save and activate your changes once you are done.
        Activating all your changes, this may take a while ...
        The edit lock associated with this edit session is released once the activation is completed.
        Activation completed
        Disconnected from weblogic server: examplesServer

        Exiting WebLogic Scripting Tool.

    Now if you go back and look in the console, the changes have been made and we now have a complete script. Of course there is a full MBean reference and you can learn the nuances of jython and WLST, but why not let the WLS console do most of the work for you! Happy scripting.

    Read the article

  • A Rose by Any Other Name..

    - by Geoff N. Hiten
    It is always a good start when you can steal a title line from one of the best writers in the English language. Let's hope I can make the rest of this post live up to the opening. One recurring problem with SQL Server is moving databases to new servers. Client applications use a variety of ways to resolve SQL Server names, some of which are not changed easily <cough SharePoint /cough>. If you happen to be using default instances on both the source and target SQL Server, then the solution is pretty simple. You create (or bug the network admin until she creates) two DNS "A" records. One points the old name to the new IP address. The other creates a new alias for the old server, since the original system name is now redirected. Note this will redirect ALL traffic from the old server to the new server, including RDP and file share connection attempts.

    Figure 1 – Microsoft DNS MMC Snap-In

    Figure 2 – DNS New Host Dialog Box

    Both records are necessary so you can still access the old server via an alternate name.

        Server Role   IP Address     Name    Alias
        Source        10.97.230.60   SQL01   SQL01_Old
        Target        10.97.230.80   SQL02   SQL01

    Table 1 – Alias List

    If you or somebody else set up connections via IP address, you deserve to have to go to each app and fix it by hand. That is the only way to fix that particular foul-up.

    If you have to deal with Named Instances either as a source or a target, then it gets more complicated. The standard fix is to use the SQL Server Configuration Manager (or one of its earlier incarnations) to create a SQL client alias to redirect the connection. This can be a pain to install and configure on multiple client servers. The good news is that SQL Server Configuration Manager AND all of its earlier versions simply write a few registry keys. Extracting the keys into a .reg file makes centralized automated deployment a snap. If the client is a 32-bit system, you have to extract the native key. If it is 64-bit, you have to extract the native key and the WoW (32-bit on 64-bit host) key.

    First, pick a development system to create the actual registry key. If you do this repeatedly, you can simply edit an existing registry file. Create the entry using the SQL Configuration Manager. You must use a 64-bit system to create the WoW key. The following example redirects from a named instance "SQL01\SQLUtility" to a default instance on "SQL02".

    Figure 3 – SQL Server Configuration Manager - Native

    Figure 3 shows the native key listing.

    Figure 4 – SQL Server Configuration Manager – WoW

    If you think you don't need the WoW key because your app is 64-bit, think again. SQL Server Management Studio is a 32-bit app, as are most SQL test utilities. Always create both keys for 64-bit target systems.

    Now that the keys exist, we can extract them into a .reg file. Fire up REGEDIT and browse to the following location: HKLM\Software\Microsoft\MSSQLServer\Client\ConnectTo. You can also search the registry for the string value of one of the server names (old or new).

    Right-click on the "ConnectTo" label and choose "Export". Save with an appropriate name and location. The resulting file should look something like this:

    Figure 5 – SQL01_Alias.reg

    Repeat the process with the location: HKLM\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo

    Note that if you have multiple alias entries, ALL of the entries will be exported. In that case, you can edit the file and remove the extra aliases. You can edit the files together into a single file. Just leave a blank line between new keys like this:

    Figure 6 – SQL01_Alias_All.reg

    Of course if you have an automatic way to deploy, it makes sense to have an automatic way to un-deploy. To delete a registry key, simply edit the .reg file and prefix the key name inside the brackets with a "-" sign, like so:

    Figure 7 – SQL01_Alias_UNDO.reg

    Now we have the ability to move any database to any server without having to install or change any applications on any client server. The whole process should be transparent to the applications, which makes planning and coordinating database moves a far simpler task.
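    Because the exported files themselves are not shown above, here is a hedged sketch of roughly what the combined alias file and its undo counterpart might look like. The DBMSSOCN (TCP/IP network library) value and the doubled backslashes are typical of such exports, but they are assumptions here; verify against a file you export yourself.

        Windows Registry Editor Version 5.00

        ; Native alias plus the WoW (32-bit) alias, redirecting SQL01\SQLUtility to SQL02
        [HKEY_LOCAL_MACHINE\Software\Microsoft\MSSQLServer\Client\ConnectTo]
        "SQL01\\SQLUtility"="DBMSSOCN,SQL02"

        [HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo]
        "SQL01\\SQLUtility"="DBMSSOCN,SQL02"

    And the undo variant:

        Windows Registry Editor Version 5.00

        ; A leading "-" inside the brackets deletes the whole key (and every alias in it)
        [-HKEY_LOCAL_MACHINE\Software\Microsoft\MSSQLServer\Client\ConnectTo]
        [-HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo]

    Deleting the whole ConnectTo key removes every alias on the machine; if you only want to remove a single alias, setting the individual value to a bare "-" (for example "SQL01\\SQLUtility"=-) is the more surgical option.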

    Read the article

  • Tutorial: Creating a Component for UCM

    - by Denisd
    So you have already installed UCM by following the tutorial at http://blogs.oracle.com/ecmbrasil/2009/05/tutorial_de_instalao_do_ucm.html, you have also done the hands-on at http://blogs.oracle.com/ecmbrasil/2009/10/tutorial_de_ucm.html, and now you want to go beyond the basics? You want to start building functionality for UCM? You want to become a UCM developer? You want to shape the Content Server in your own image?! Then today is your lucky day! In this tutorial we will learn how to create a component for the Content Server. Our first component, although not entirely simple, will be built using only Content Server resources. In a future tutorial we will learn how to use Java classes as part of our components.

    In this tutorial we will develop a Favorites feature, where users can mark certain documents as their Favorites and later view those documents in a list. We will not build the component with all of its possible features, but with what you will see here it will be straightforward to improve this component, even for production environments.

    The MyFavorites Component

    Some characteristics of our Favorites component:

    - For reasons of space, we will build this component in a "quick and dirty" way, that is, without necessarily following the best practices of component development. To better understand component development practices, I recommend reading the Working With Components guide.
    - It will be developed for Brazilian Portuguese only. Other languages can be added later.
    - It will add an "Add to Favorites" option to the "Content Actions" menu (Content Information page), so that the user can mark a file as one of his favorites.
    - When clicking this link, the user will be taken to a page where he can type a comment about this favorite, to make it easier to read later.
    - The favorites will be stored in a database table that we will create as part of the component.
    - The "My Content Server" tray will have a new option called "My Favorites", which brings up a page that lists the favorites and allows the user to delete the links.
    - Some features will be left out of this exercise, again for reasons of space, but we will list them at the end as complementary exercises.

    Resources of our Component

    The Favorites component will be built from a few resources. Let's take a closer look at what these resources are and what they do:

    - Query: a query is any operation that must be executed against the database, the famous CRUD: Create, Read, Update, Delete. There are different ways of calling a query, depending on its purpose. A Select Query executes a SQL command but discards the result; it is used only to test whether the database connection is OK and will not be used in our exercise. An Execute Query executes a SQL command that changes data in the database - an INSERT, UPDATE or DELETE - and discards the results; we will use Execute Queries to create, change and delete favorites. A Select Cache Query executes a SQL SELECT command and stores the results in a ResultSet; this ResultSet is returned as the result of the service and can be manipulated in IDOC, Java or other languages. We will use a Select Cache Query to return the list of a user's favorites.
    - Service: services are responsible for executing the queries (or Java classes, but that is a topic for another tutorial...). The service receives the input parameters, executes the query and returns the ResultSet (in the case of a SELECT). Services can be executed through templates, IDOC pages, other applications (through the API), or directly in the browser URL. In this exercise we will create services to Create, Edit, Delete and List a user's favorites.
    - Template: templates are the user interface pages that will be presented to users. For example, before executing the service that deletes a document from the favorites, I want the user to see a page with the document ID and a Confirm button, so that he is sure he is deleting the right record. This page can be created as a template. In this exercise we will build templates for the main services, as well as the page that lists all of the user's favorites and offers the edit and delete actions. Templates are nothing more than HTML pages with IDOC scripts.

    Our sequence of activities for developing this component will be:

    - Create the database table
    - Create the component using the Component Wizard
    - Create the Queries to insert, edit, delete and list favorites
    - Create the Services that execute these Queries
    - Create the templates, which are the pages that will interact with the users
    - Create the links, on the content information page and in the My Content Server tray

    Well then, let's get started! Check out the complete tutorial by clicking this link: http://blogs.oracle.com/ecmbrasil/Tutorial_Componente_Banco.pdf

    Happy coding!  :-)
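    As a rough starting point for the first activity in the sequence above (creating the database table), here is a minimal, hypothetical SQL sketch of what the favorites table could look like on an Oracle schema. The table and column names are illustrative assumptions only; they are not taken from the tutorial PDF linked above.

        -- Hypothetical favorites table: one row per (user, document) bookmark
        CREATE TABLE MyFavorites (
          dID         NUMBER        NOT NULL,   -- revision id of the bookmarked content item
          dDocName    VARCHAR2(80)  NOT NULL,   -- content id, convenient for building links
          dUser       VARCHAR2(50)  NOT NULL,   -- owner of the favorite
          xComment    VARCHAR2(255),            -- free-text note shown in the listing
          dCreateDate DATE DEFAULT SYSDATE,     -- when the favorite was added
          CONSTRAINT pk_MyFavorites PRIMARY KEY (dUser, dID)
        );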

    Read the article

  • Upgrading to Oracle Enterprise Manager 12c Release 2: Top Tips One Must Know

    - by AnkurGupta
    Recently Oracle announced an incremental release of Enterprise Manager 12c called Enterprise Manager 12c Release 2 (EM12c R2), which includes several new exciting features (Press announcement). Right before the official release, we upgraded an internal production site from EM 12c R1 to EM 12c R2 and had an extremely pleasant experience. Let me share a few key takeaways as well as a few tips from this upgrade exercise.

    I - Why Should You Upgrade To Enterprise Manager 12c Release 2

    While an upgrade is usually recommended primarily to take advantage of the latest features (which is valid for this upgrade as well), I found several other compelling reasons purely from a deployment perspective.

    - Standardize your EM deployment: Enterprise Manager comprises several different components (OMS, agents, plug-ins, etc.) and it is possible that these are at varied patch levels in your environment. For instance, in the case of an environment containing Bundle Patch 1 (customer announcement), there is a good chance that you may not have all the components up-to-date. There are two possible reasons: Bundle Patch 1 involved patching different components (OMS, agents, plug-ins) with multiple one-off patches which may not have been applied to all components yet, and Bundle Patch 1 for different platforms was not released at the same time, which means you may not have had the chance to patch all the components on different platforms. Note: BP1 patches are not mandatory to upgrade to the EM12c R2 release. EM 12c R2 provides an excellent opportunity to standardize your Cloud Control environment (OMS, repository and agents) and plug-ins to the latest versions in a single shot.
    - All platform releases are made available simultaneously: For the very first time in the history of EM releases, all the platforms were released on day one itself, which means you do not need to wait for platform-specific binaries for the EM OMS or Agent to perform installs or upgrades in a heterogeneous environment.
    - Highly refined and automated process – the upgrade process is by far the smoothest and cleanest compared to previous releases of Enterprise Manager. The following are the ones that stand out. Automatic plug-in management: plug-in upgrade along with new plug-in deployment is supported in the upgrade installer wizard, which means the bulk of the updates to the OMS and repository can be done in the same workflow; this saves time and minimizes user inputs. Plug-in upgrade or migrate / Auto Update: while doing the OMS and repository upgrade, you can use the Auto Update screen in Oracle Universal Installer to check for any updates/patches, which will help you avoid the known issues and make sure that your upgrade is successful. Mass upgrade of EM Agents: a new dedicated menu has been added in the EM console for agent upgrade, and the agent upgrade workflow is extremely simple, requiring the agent name as the only input. ADM / JVMD Manager/Agent upgrade: the complete process is supported via UI screens. Finally, the EM12c R2 Upgrade Guide is much simpler to follow than those for earlier releases, which is attributed to the simpler upgrade process.
    - Robust and performing platform: the EM12c R2 release not only includes several new features, but also provides a more stable platform which incorporates several fixes and enhancements in the Enterprise Manager framework.

    II - Few Tips To Remember

    In my last post (blog link) I shared a few tips and tricks from my experience applying the Bundle Patch. Recently I upgraded the same site to EM 12c R2 and found a few points that you must take note of while planning this upgrade. The tips below are also applicable to EM 12c R1 environments that do not have the Bundle Patch 1 patches applied.

    - Verify the monitored application certification – Specific targets like E-Business Suite have not yet been certified as managed targets in EM 12c R2. Therefore make sure to recheck the Enterprise Manager certification matrix on My Oracle Support before planning the upgrade.
    - Plan downtime – Because EM 12c R2 is an incremental release of EM 12c, the EM 12c R1 to EM 12c R2 upgrade supports only the 1-system upgrade approach, which means there will be downtime.
    - OMS name change after upgrade – In the case of multi-OMS environments, the additional OMS is renamed after the upgrade, which has a few implications when you upgrade the JVMD and ADP agents on the OMS. This is well documented in the upgrade guide, but make sure you read through all the notes.
    - Upgrading BI Publisher – EM12c R2 is certified with BI Publisher 11.1.1.6.0 only. Therefore, in case you are using EM 12c R1 integrated with BI Publisher 11.1.1.5.0, you must upgrade BI Publisher to 11.1.1.6.0. Follow the steps from the Advanced Installation and Configuration Guide here.
    - Perform post-upgrade tasks – Make sure to perform the post-upgrade steps mentioned in the documentation here. These include critical changes that must be done right after the upgrade to get the right configuration. For instance, the Database plug-in should be upgraded to Revision 3 (12.1.0.2.0 [u120804]).
    - Delete the old OMS home – EM12c R1 to EM12c R2 is an out-of-place upgrade, which means it creates a new Oracle home for the OMS, plug-ins, etc. Therefore please ensure that you have sufficient extra space for the new OMS before starting the upgrade process, and that you clean up the old OMS home after the upgrade process (steps are available here). DO NOT remove the agent home on the OMS host, because the agent is upgraded in place. If you have a standby OMS setup, then do look into the steps to upgrade the standby OMS from the upgrade guide before going ahead.
    - Read the right documentation – Make sure to follow the Upgrade Guide, which provides the most comprehensive information on the EM12c R2 upgrade process. Additionally you can refer to other resources to get familiar with upgrade concepts: the recorded webcast "Oracle Enterprise Manager Cloud Control 12c Release 2 Installation and Upgrade Overview" and the presentation "Understanding Enterprise Manager 12.1.0.2 Upgrade".

    We are very excited about this latest release and look forward to hearing any feedback from your upgrade experience!

    Read the article

  • MySQL Connector/Net 6.4.6 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.4.6, a new version of the all-managed .NET driver for MySQL, has been released. This is a maintenance release and is recommended for use in production environments. It is appropriate for use with MySQL server versions 5.0-5.6. This is intended to be the final release for Connector/NET 6.4. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point - if you can't find this version on some mirror, please try again later or choose another download site).

    The 6.4.6 version of MySQL Connector/Net brings the following fixes:

    - Fix for List.Contains generates a bunch of ORs instead of more efficient IN clause in LINQ to Entities (Oracle bug #14016344, MySql bug #64934).
    - Fix for error when trying to change the name of an Index on the Indexes/Keys editor; along with this fix, users can now change the Index type of a new Index, which could not be done in previous versions, and when changing the Index name the change is reflected in the list view at the left side of the Index/Keys editor (Oracle bug #13613801).
    - Fix for stored procedure call using only its name with EF code first (MySql bug #64999, Oracle bug #14008699).
    - Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produces MySql's locate instead of Like (MySql bug #64935, Oracle bug #14009363).
    - Fix for script generated for code first contains wrong alter table and wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091).
    - Fix for Exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779).
    - Fix for Session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracle bug #13733054).
    - Fixed deleting a user profile using the Profile provider (MySQL bug #64409, Oracle bug #13790123).
    - Fix for bug Cannot Create an Entity with a Key of Type String (MySQL bug #65289, Oracle bug #14540202). This fix checks if the type has a FixedLength facet set in order to create a char; otherwise it creates varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First, also tested in Database First. Unit tests added for Code First and ProviderManifest.
    - Fix for bug "CacheServerProperties can cause 'Packet too large' error" (MySQL bug #66578, Oracle bug #14593547).
    - Fix for handling unnamed parameters in MySqlCommand. This fix allows MySqlCommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?)) (MySQL bug #66060, Oracle bug #14499549).
    - Fixed inheritance in Entity Framework Code First scenarios. The Discriminator column is created using its correct type as varchar(128) (MySql bug #63920 and Oracle bug #13582335).
    - Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048).
    - Fixed bug ASP.NET Membership database fails on MySql database UTF32 (MySQL bug #65144, Oracle bug #14495292).
    - Fix for MySqlCommand.LastInsertedId holding only 32-bit values (MySql bug #65452, Oracle bug #14171960) by changing several internal declarations of lastinsertid from int to long.
    - Fixed "Decimal type should have digits at right of decimal point"; the default is now 2, but the user's changes in the EDM designer are recognized (MySql bug #65127, Oracle bug #14474342).
    - Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715).
    - Fix for error when calling RoleProvider.RemoveUserFromRole(): causes an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338).
    - Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand", too many MemoryStream instances created (MySql bug #65696, Oracle bug #14468204).
    - Small improvement on MySqlPoolManager CleanIdleConnections for a better MySqlPoolManager idle cleanup timer at startup (MySql bug #66472 and Oracle bug #14652624).
    - Fix for bug TIMESTAMP values are mistakenly represented as DateTime with Kind = Local (MySql bug #66964, Oracle bug #14740705).
    - Fix for bug Keyword not supported. Parameter name: AttachDbFilename (MySql bug #66880, Oracle bug #14733472).
    - Added support to the MySql script file to retrieve data when using "SHOW" statements.
    - Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674).
    - Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718).
    - Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176).
    - Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964).

    The release is available to download at http://dev.mysql.com/downloads/connector/net/6.4.html

    Documentation
    -------------------------------------
    You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html

    You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/.

    Enjoy and thanks for the support!

    Read the article

  • Creating Visual Studio projects that only contain static files

    - by Eilon
    Have you ever wanted to create a Visual Studio project that only contained static files and didn't contain any code? While working on ASP.NET MVC we had a need for exactly this type of project. Most of the projects in the ASP.NET MVC solution contain code, such as managed code (C#), unit test libraries (C#), and Script# code for generating our JavaScript code. However, one of the projects, MvcFuturesFiles, contains no code at all. It only contains static files that get copied to the build output folder.

    As you may well know, adding static files to an existing Visual Studio project is easy. Just add the file to the project and in the property grid set its Build Action to "Content" and the Copy to Output Directory to "Copy if newer." This works great if you have just a few static files that go along with other code that gets compiled into an executable (EXE, DLL, etc.). But this solution does not work well if the project only contains static files and has no compiled code. If you create a new project in Visual Studio and add static files to it you'll still get an EXE or DLL copied to the output folder, despite not having any actual code. We wanted to avoid having a teeny little DLL generated in the output folder.

    In ASP.NET MVC 2 we came up with a simple solution to this problem. We started out with a regular C# Class Library project but then edited the project file to alter how it gets built. The critical part to get this to work is to define the MSBuild targets for Build, Clean, and Rebuild to perform custom tasks instead of running the compiler. The Build, Clean, and Rebuild targets are the three main targets that Visual Studio requires in every project so that the normal UI functions properly. If they are not defined then running certain commands in Visual Studio's Build menu will cause errors.

    Once you create the class library project, there are a few easy steps to change it into a static file project.

    The first step in editing the csproj file is to remove the reference to the Microsoft.CSharp.targets file because the project doesn't contain any C# code:

        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

    The second step is to define the new Build, Clean, and Rebuild targets to delete and then copy the content files:

        <Target Name="Build">
          <Copy SourceFiles="@(Content)" DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" />
        </Target>
        <Target Name="Clean">
          <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" />
        </Target>
        <Target Name="Rebuild" DependsOnTargets="Clean;Build">
        </Target>

    The third and last step is to add all the files to the project as normal Content files (as you would do in any project type).

    To see how we did this in the ASP.NET MVC 2 project you can download the source code and inspect the MvcFuturesFiles.csproj project file. If you're working on a project that contains many static files I hope this solution helps you out!

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kinds of storage in a database: row store and column store. Row store does exactly as the name suggests – it stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search – instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need only search a much smaller number of columns. This means major increases in search speed and more efficient use of the hard drive. Additionally, column store indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this looks very exciting, but it does not mean that you should convert every single index from a row store to a column store index. One has to understand the proper places to use row store or column store indexes. Let us understand in this article how the Columnstore type of index differs.

    Column store indexes are powered by Microsoft's VertiPaq technology. However, all you really need to know is that this method of storing data in columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don't have to learn new syntax to create them. You just need to specify the keyword "COLUMNSTORE" and enter the data as you normally would. Keep in mind that once you add a column store to a table, though, you cannot delete, insert or update the data – it is READ ONLY. However, since column store will mainly be used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index.

    A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between the column store and row store approaches is illustrated below: in the case of row store indexes, multiple pages will contain multiple rows, with the columns spanning across multiple pages; in the case of column store indexes, multiple pages will contain multiple single columns. This means that only the columns needed to solve a query will be fetched from disk. Additionally, there is a good chance that there will be redundant data in a single column, which will further help to compress the data; this has a positive effect on the buffer hit rate, as most of the data will be in memory and therefore will not need to be retrieved.

    Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step let us create a dataset which is large enough to show the performance impact of a columnstore index. The time taken to create the sample data may vary on different computers based on the resources.

        USE AdventureWorks
        GO
        -- Create New Table
        CREATE TABLE [dbo].[MySalesOrderDetail](
          [SalesOrderID] [int] NOT NULL,
          [SalesOrderDetailID] [int] NOT NULL,
          [CarrierTrackingNumber] [nvarchar](25) NULL,
          [OrderQty] [smallint] NOT NULL,
          [ProductID] [int] NOT NULL,
          [SpecialOfferID] [int] NOT NULL,
          [UnitPrice] [money] NOT NULL,
          [UnitPriceDiscount] [money] NOT NULL,
          [LineTotal] [numeric](38, 6) NOT NULL,
          [rowguid] [uniqueidentifier] NOT NULL,
          [ModifiedDate] [datetime] NOT NULL
        ) ON [PRIMARY]
        GO
        -- Create clustered index
        CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
        ( [SalesOrderDetailID])
        GO
        -- Create Sample Data Table
        -- WARNING: This Query may run upto 2-10 minutes based on your systems resources
        INSERT INTO [dbo].[MySalesOrderDetail]
        SELECT S1.*
        FROM Sales.SalesOrderDetail S1
        GO 100

    Now let us do a quick performance test. I have kept STATISTICS IO ON for measuring how much IO the following queries take. In my test, first I will run the query which uses the regular index and note the IO usage of the query. After that we will create the columnstore index and measure the IO of the same query.

        -- Performance Test
        -- Comparing Regular Index with ColumnStore Index
        USE AdventureWorks
        GO
        SET STATISTICS IO ON
        GO
        -- Select Table with regular Index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        GO
        -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.
        -- Create ColumnStore Index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
        ON [MySalesOrderDetail]
        (UnitPrice, OrderQty, ProductID)
        GO
        -- Select Table with Columnstore Index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        GO

    It is very clear from the results that the query performs extremely fast after creating the ColumnStore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed in the query are stored in the same pages and the query does not have to go through every single page to read those columns. If we enable the execution plan and compare, we can see that the column store index performs far better than the regular index in this case. Let us clean up the database.

        -- Cleanup
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        TRUNCATE TABLE dbo.MySalesOrderDetail
        GO
        DROP TABLE dbo.MySalesOrderDetail
        GO

    In future posts we will see cases where the Columnstore index is not an appropriate solution, as well as a few other tricks and tips for the columnstore index.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Simple Example to Configure Resource Governor – Introduction to Resource Governor

    - by pinaldave
    Let us jump right into question and answer mode.

    What is Resource Governor? Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling on the SQL Server.

    Why is Resource Governor required? If there are different workloads running on SQL Server and each of the workloads needs different resources, or when workloads are competing for resources with each other and affecting the performance of the whole server, Resource Governor is a very important tool.

    What would be a real-world example of the need for Resource Governor? Here are two simple scenarios where Resource Governor can be very useful.

    Scenario 1: A server runs an OLTP workload and various resource-intensive reports at the same time. The ideal situation is to have two servers which are data-synced with each other, where one server runs the OLTP transactions and the second server runs all the resource-intensive reports. However, not everybody has the luxury to set up this kind of environment. In the situation where reports and OLTP transactions are running on the same server, by limiting the resources for the reporting workload it can be ensured that the OLTP's critical transactions are not throttled.

    Scenario 2: There are two DBAs in one organization. DBA A runs critical queries for the business and DBA B is doing maintenance of the database. At any point in time DBA A's work should not be affected, but at the same time DBA B should be allowed to work as well. The ideal situation is that when DBA B starts working he gets some resources, but he can't get more than the defined resources.

    Does SQL Server have any default Resource Governor components? Yes, SQL Server has two resource governor components created by default. 1) Internal – this is used by the database engine exclusively and the user has no control over it. 2) Default – this is used by all the workloads which are not assigned to any other group.

    What are the major components of Resource Governor?

    - Resource Pools
    - Workload Groups
    - Classification

    In simple words, here is the Resource Governor process:

    - Create a resource pool
    - Create a workload group
    - Create a classification function based on the criteria specified
    - Enable Resource Governor with the classification function

    Let me further explain the same with a graphical image.

    Is it possible to configure Resource Governor with T-SQL? Yes, here is the code for it with explanations in between.

    Step 0: Here we are assuming that there are separate login accounts for the reporting server and the OLTP server.

        /*-----------------------------------------------
        Step 0: (Optional and for Demo Purpose)
        Create Two User Logins
        1) ReportUser, 2) PrimaryUser
        Use ReportUser login for Reports workload
        Use PrimaryUser login for OLTP workload
        -----------------------------------------------*/

    Step 1: Creating resource pools. We are creating two resource pools: 1) Report Server and 2) Primary OLTP Server. We are giving only a few resources to the Report Server pool because, as described in Scenario 1, the other server is mission critical, not the report server.

        -----------------------------------------------
        -- Step 1: Create Resource Pool
        -----------------------------------------------
        -- Creating Resource Pool for Report Server
        CREATE RESOURCE POOL ReportServerPool
        WITH (
          MIN_CPU_PERCENT=0,
          MAX_CPU_PERCENT=30,
          MIN_MEMORY_PERCENT=0,
          MAX_MEMORY_PERCENT=30)
        GO
        -- Creating Resource Pool for OLTP Primary Server
        CREATE RESOURCE POOL PrimaryServerPool
        WITH (
          MIN_CPU_PERCENT=50,
          MAX_CPU_PERCENT=100,
          MIN_MEMORY_PERCENT=50,
          MAX_MEMORY_PERCENT=100)
        GO

    Step 2: Creating workload groups. We are creating two workload groups, each mapping to one of the resource pools which we have just created.

        -----------------------------------------------
        -- Step 2: Create Workload Group
        -----------------------------------------------
        -- Creating Workload Group for Report Server
        CREATE WORKLOAD GROUP ReportServerGroup
        USING ReportServerPool ;
        GO
        -- Creating Workload Group for OLTP Primary Server
        CREATE WORKLOAD GROUP PrimaryServerGroup
        USING PrimaryServerPool ;
        GO

    Step 3: Creating a user-defined function which routes the workload to the appropriate workload group. In this example we are checking SUSER_NAME() and making the workload group selection based on it. We can use other functions such as HOST_NAME(), APP_NAME(), IS_MEMBER() etc.

        -----------------------------------------------
        -- Step 3: Create UDF to Route Workload Group
        -----------------------------------------------
        CREATE FUNCTION dbo.UDFClassifier()
        RETURNS SYSNAME
        WITH SCHEMABINDING
        AS
        BEGIN
          DECLARE @WorkloadGroup AS SYSNAME
          IF(SUSER_NAME() = 'ReportUser')
            SET @WorkloadGroup = 'ReportServerGroup'
          ELSE IF (SUSER_NAME() = 'PrimaryUser')
            SET @WorkloadGroup = 'PrimaryServerGroup'
          ELSE
            SET @WorkloadGroup = 'default'
          RETURN @WorkloadGroup
        END
        GO

    Step 4: In this final step we enable Resource Governor with the classifier function created in step 3.

        -----------------------------------------------
        -- Step 4: Enable Resource Governor
        --         with UDFClassifier
        -----------------------------------------------
        ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.UDFClassifier);
        GO
        ALTER RESOURCE GOVERNOR RECONFIGURE
        GO

    Step 5: If you are following this demo and want to clean up your example, you should run the following script. Running it will disable your Resource Governor as well as delete all the objects created so far.

        -----------------------------------------------
        -- Step 5: Clean Up
        -- Run only if you want to clean up everything
        -----------------------------------------------
        ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL)
        GO
        ALTER RESOURCE GOVERNOR DISABLE
        GO
        DROP FUNCTION dbo.UDFClassifier
        GO
        DROP WORKLOAD GROUP ReportServerGroup
        GO
        DROP WORKLOAD GROUP PrimaryServerGroup
        GO
        DROP RESOURCE POOL ReportServerPool
        GO
        DROP RESOURCE POOL PrimaryServerPool
        GO
        ALTER RESOURCE GOVERNOR RECONFIGURE
        GO

    I hope this introductory example sheds enough light on the subject of Resource Governor. In future posts we will take this same example and learn a few more details.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Resource Governor

    Read the article

  • Prevent Changing the Screen Saver and Wallpaper in Windows 7

    - by Mysticgeek
    Sometimes you might not want users to have the ability to change Screen Savers and Wallpaper on Windows 7 workstations. Today we look at how to prevent them from changing either one or both. You might administer computers in your home or small office and find it annoying when users continuously change the wallpaper and Screen Savers to something obnoxious. A lot of times they might be inexperienced users and download these so-called “wonderful and free” Screen Saver/Wallpaper packages from shady sites that include loads of Spyware. Preventing users from changing them is another helpful tool to avoid wasteful time spent switching things back. Prevent Changing Screensavers & Wallpaper Using Group Policy Editor  Note: This method uses Group Policy which is not available in Home versions on Windows 7. Open the Start Menu and enter gpedit.msc into the Search box and hit Enter. When Local Group Policy Editor opens, navigate to User Configuration \ Administrative Templates \ Control Panel \ Personalization. Then in the right column double-click on Prevent changing desktop background. Now check the radio button next to Enabled, then click OK. Back on the Group Policy Screen, double-click on Prevent changing screen saver. In the next screen select the radio button next to Enable, click OK, then close out of Group Policy Editor. Now when a user goes into the Personalization section, the Desktop Background hyperlink is now grayed out and inactive. Notice the message One or more of the settings on this page has been disabled by the system administrator at the bottom of the section. If they click to change the Screen Saver, an error message will pop up letting them know the function is disabled. Prevent Changing Screensavers & Wallpaper Using a Registry Hack You can also make a couple Registry changes to prevent users from changing the Wallpaper & Screen Saver…which will work on Home versions of Windows 7. Before making any Registry changes make sure you back it up first. Open the Registry by typing regedit into the Search box in the Start menu and hit Enter. First we’ll start with the Wallpaper. Navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System and create a new String Value and name it Wallpaper. Then modify the Value data to point to the location of the Wallpaper you want it to always be. Where in this example it’s our main wallpaper on our local drive…then click OK. Now let’s make sure they can’t change the Screen Saver. In the same Registry location, we need to make a new DWORD (32-bit) Value. Give it the Value name of NoDispScrSavPage and the value data of “1” and click OK. Close out of the Registry and restart the machine or simply log off then back on again for the changes to take effect. Results For the Wallpapers, a user can still go in and see the selections, however if they try to change it to something else… It will just go back to the Personalization screen and no changes will be made, as we set the value to only be the background we specified. If the user tries to make a change to the Screen Saver, the hyperlink will be grayed out and inactive, and the message One or more of the settings on this page has been disabled by the system administrator will be displayed at the bottom of the section. Conclusion If you’re tired of users changing the Wallpaper and Screen Saver, and want another way to help avoid Malware, locking down these settings can help a lot. Again, before making any changes to the Registry, make sure to back it up. 
These settings should work in Vista and XP as well.
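If you administer several machines and would rather script the Registry tweak described above than click through regedit on each one, here is a minimal C# sketch. It uses the Microsoft.Win32 Registry API; the wallpaper path is a hypothetical placeholder you would point at your own image, and as always, back up the Registry first.

using Microsoft.Win32;

class LockPersonalization
{
    static void Main()
    {
        // Key used by the policy values described in the article.
        const string policyKey =
            @"HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System";

        // Force a fixed wallpaper (placeholder path - substitute your own image).
        Registry.SetValue(policyKey, "Wallpaper",
            @"C:\Windows\Web\Wallpaper\corporate.jpg", RegistryValueKind.String);

        // Hide the Screen Saver settings page (NoDispScrSavPage = 1).
        Registry.SetValue(policyKey, "NoDispScrSavPage", 1, RegistryValueKind.DWord);
    }
}

As with the manual edit, log off and back on (or restart) for the change to take effect.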

    Read the article

  • Change the Default Font Size in Word

    - by Matthew Guay
Are you frustrated by always having to change the font size before you create a document in Word?  Here’s how you can end that frustration and set your favorite default font size once and for all! Microsoft changed the default font to 11 point Calibri in Word 2007 after years of 12 point Times New Roman being the default.  Although it can be easily overlooked, there are ways in Word to change the default settings to anything you want.  Whether you want to change your default to 12 point Calibri or to 48 point Comic Sans…here’s how to change your default font settings in Word 2007 and 2010. Changing Default Fonts in Word To change the default font settings, click the small box with an arrow in the bottom right corner of the Font section of the Home tab in the Ribbon.   In the Font dialog box, choose the default font settings you want.  Notice in the Font box it says “+Body”; this means that the font will be chosen by the document style you choose, and you are only selecting the default font style and size.  So, if your style uses Calibri, then your font will be Calibri at the size and style you chose.  If you’d prefer to choose a specific font to be the default, just select one from the drop-down box and this selection will override the font selection in your document style. Here we left all the default settings, except we selected 12 point font in the Latin text box (this is your standard body text; users of Asian languages such as Chinese may see a box for Asian languages).  When you’ve made your selections, click the “Set as Default” button in the bottom left corner of the dialog. You will be asked to confirm that you want these settings to be made default.  In Word 2010, you will be given the option to set these settings for this document only or for all documents.  Click the bullet beside “All documents based on the Normal.dotm template?”, and then click OK. In Word 2007, simply click OK to save these settings as default. Now, whenever you open Word or create a new document, your default font settings should be set exactly to what you want.  Simply repeat these steps if you ever want to change your default font settings again. Editing your default template file Another way to change your default font settings is to edit your Normal.dotm file.  This file is what Word uses to create new documents; it basically copies the formatting in this document each time you make a new document. To edit your Normal.dotm file, enter the following in the address bar in Explorer or in the Run prompt: %appdata%\Microsoft\Templates This will open your Office Templates folder.  Right-click on the Normal.dotm file, and click Open to edit it.  Note: Do not double-click on the file, as this will only create a new document based on Normal.dotm and any edits you make will not be saved in this file.   Now, change any font settings as you normally would.  Remember: anything you change or enter in this document will appear in any new document you create using Word. If you want to revert to your default settings, simply delete your Normal.dotm file.  Word will recreate it with the standard default settings the next time you open Word. Please Note: Changing your default font size will not change the font size in existing documents, so these will still show the settings you used when those documents were created.  Also, some add-ins can affect your Normal.dotm template.  If Word does not seem to remember your font settings, try disabling Word add-ins to see if this helps. 
Conclusion Sometimes it’s the small things that can be the most frustrating.  Getting your default font settings the way you want is a great way to take away a frustration and make you more productive. And here’s a quick question: Do you prefer the new default 11 point Calibri, or do you prefer 12 point Times New Roman or some other combination?  Sound off in the comments, and let the world know your favorite font settings.
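If you look after more than one machine, you can locate (or reset) the Normal.dotm template programmatically. Here is a minimal C# sketch, assuming the standard %appdata%\Microsoft\Templates location mentioned above; note that deleting the file makes Word rebuild it with stock defaults, so treat this as a reset rather than a tweak.

using System;
using System.IO;

class NormalTemplate
{
    static void Main()
    {
        // Resolves %appdata%\Microsoft\Templates\Normal.dotm
        string templates = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
            "Microsoft", "Templates");
        string normal = Path.Combine(templates, "Normal.dotm");

        if (File.Exists(normal))
        {
            // Keep a backup, then remove the file; Word recreates Normal.dotm on next launch.
            File.Copy(normal, normal + ".bak", true);
            File.Delete(normal);
            Console.WriteLine("Normal.dotm backed up and removed; Word will rebuild it.");
        }
        else
        {
            Console.WriteLine("Normal.dotm not found at " + normal);
        }
    }
}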

    Read the article

  • ASP.NET Web API - Screencast series Part 4: Paging and Querying

    - by Jon Galloway
    We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods. In Part 3, we modified data on the server using DELETE and POST methods. In Part 4, we'll extend on our simple querying methods form Part 2, adding in support for paging and querying. This part shows two approaches to querying data (paging really just being a specific querying case) - you can do it yourself using parameters passed in via querystring (as well as headers, other route parameters, cookies, etc.). You're welcome to do that if you'd like. What I think is more interesting here is that Web API actions that return IQueryable automatically support OData query syntax, making it really easy to support some common query use cases like paging and filtering. A few important things to note: This is just support for OData query syntax - you're not getting back data in OData format. The screencast demonstrates this by showing the GET methods are continuing to return the same JSON they did previously. So you don't have to "buy in" to the whole OData thing, you're just able to use the query syntax if you'd like. This isn't full OData query support - full OData query syntax includes a lot of operations and features - but it is a pretty good subset: filter, orderby, skip, and top. All you have to do to enable this OData query syntax is return an IQueryable rather than an IEnumerable. Often, that could be as simple as using the AsQueryable() extension method on your IEnumerable. Query composition support lets you layer queries intelligently. If, for instance, you had an action that showed products by category using a query in your repository, you could also support paging on top of that. The result is an expression tree that's evaluated on-demand and includes both the Web API query and the underlying query. So with all those bullet points and big words, you'd think this would be hard to hook up. Nope, all I did was change the return type from IEnumerable<Comment> to IQueryable<Comment> and convert the Get() method's IEnumerable result using the .AsQueryable() extension method. public IQueryable<Comment> GetComments() { return repository.Get().AsQueryable(); } You still need to build up the query to provide the $top and $skip on the client, but you'd need to do that regardless. Here's how that looks: $(function () { //--------------------------------------------------------- // Using Queryable to page //--------------------------------------------------------- $("#getCommentsQueryable").click(function () { viewModel.comments([]); var pageSize = $('#pageSize').val(); var pageIndex = $('#pageIndex').val(); var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize); $.getJSON(url, function (data) { // Update the Knockout model (and thus the UI) with the comments received back // from the Web API call. 
viewModel.comments(data); }); return false; }); }); And the neat thing is that - without any modification to our server-side code - we can modify the above jQuery call to request the comments be sorted by author: $(function () { //--------------------------------------------------------- // Using Queryable to page //--------------------------------------------------------- $("#getCommentsQueryable").click(function () { viewModel.comments([]); var pageSize = $('#pageSize').val(); var pageIndex = $('#pageIndex').val(); var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize) + '&$orderby=Author'; $.getJSON(url, function (data) { // Update the Knockout model (and thus the UI) with the comments received back // from the Web API call. viewModel.comments(data); }); return false; }); }); So if you want to make use of OData query syntax, you can. If you don't like it, you're free to hook up your filtering and paging however you think is best. Neat. In Part 5, we'll add on support for Data Annotation based validation using an Action Filter.
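The screencast sticks with the OData query syntax, but as noted above you are also free to handle paging yourself with plain query-string parameters. Here is a minimal sketch of that do-it-yourself alternative, assuming the same Comment type and repository used in the series (the repository interface and parameter names here are illustrative):

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class CommentsController : ApiController
{
    // Assumed repository abstraction from the series' sample code.
    private readonly ICommentRepository repository;

    public CommentsController(ICommentRepository repository)
    {
        this.repository = repository;
    }

    // GET /api/comments?pageIndex=0&pageSize=10
    public IEnumerable<Comment> GetComments(int pageIndex = 0, int pageSize = 10)
    {
        // Skip/Take do the paging on the server; no OData syntax involved.
        return repository.Get()
                         .Skip(pageIndex * pageSize)
                         .Take(pageSize);
    }
}

On the client you would then build ?pageIndex=0&pageSize=10 instead of $top/$skip - more work to compose queries, but no dependency on the OData conventions.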

    Read the article

  • Out of disk space - /boot at 100%

    - by uvasal
    My /boot is at 100%. When I run aptitude search ~ilinux-image I'm getting loads of unused images. When I try to delete one of them (after checking which one is currently in use by doing uname -r), e.g apt-get autoremove linux-image-3.2.0-44-generic I get: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: linux-generic : Depends: linux-headers-generic (= 3.2.0.51.61) but 3.2.0.54.64 is to be installed linux-server : Depends: linux-headers-server (= 3.2.0.51.61) but 3.2.0.54.64 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). And running apt-get -f install throws No space left on device. I've also tried doing apt-get purge but I am getting the same thing. Output of df -h and dpkg -l linux-*.: root@hb2088:/srv/www# df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 9.4G 3.0G 6.0G 34% / udev 301M 4.0K 301M 1% /dev tmpfs 124M 228K 124M 1% /run none 5.0M 0 5.0M 0% /run/lock none 309M 0 309M 0% /run/shm /dev/sda1 92M 91M 0 100% /boot root@hb2088:/srv/www# dpkg -l linux-* Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Description +++-====================================================-====================================================-======================================================================================================================== un linux-doc-3.2.0 <none> (no description available) ii linux-firmware 1.79.6 Firmware for Linux kernel drivers iU linux-generic 3.2.0.51.61 Complete Generic Linux kernel un linux-headers <none> (no description available) un linux-headers-3 <none> (no description available) un linux-headers-3.0 <none> (no description available) ii linux-headers-3.2.0-44 3.2.0-44.69 Header files related to Linux kernel version 3.2.0 ii linux-headers-3.2.0-44-generic 3.2.0-44.69 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP ii linux-headers-3.2.0-45 3.2.0-45.70 Header files related to Linux kernel version 3.2.0 ii linux-headers-3.2.0-45-generic 3.2.0-45.70 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP ii linux-headers-3.2.0-48 3.2.0-48.74 Header files related to Linux kernel version 3.2.0 ii linux-headers-3.2.0-48-generic 3.2.0-48.74 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP ii linux-headers-3.2.0-51 3.2.0-51.77 Header files related to Linux kernel version 3.2.0 ii linux-headers-3.2.0-51-generic 3.2.0-51.77 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP ii linux-headers-3.2.0-52 3.2.0-52.78 Header files related to Linux kernel version 3.2.0 ii linux-headers-3.2.0-52-generic 3.2.0-52.78 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP iU linux-headers-3.2.0-54 3.2.0-54.82 Header files related to Linux kernel version 3.2.0 iU linux-headers-3.2.0-54-generic 3.2.0-54.82 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP iU linux-headers-generic 3.2.0.54.64 Generic Linux kernel headers iU linux-headers-server 3.2.0.54.64 Linux kernel headers on Server Equipment. 
un linux-image <none> (no description available) un linux-image-3.0 <none> (no description available) ii linux-image-3.2.0-44-generic 3.2.0-44.69 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-45-generic 3.2.0-45.70 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-48-generic 3.2.0-48.74 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-51-generic 3.2.0-51.77 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-52-generic 3.2.0-52.78 Linux kernel image for version 3.2.0 on 64 bit x86 SMP in linux-image-3.2.0-54-generic <none> (no description available) iU linux-image-generic 3.2.0.51.61 Generic Linux kernel image iU linux-image-server 3.2.0.51.61 Linux kernel image on Server Equipment. un linux-initramfs-tool <none> (no description available) un linux-kernel-headers <none> (no description available) un linux-kernel-log-daemon <none> (no description available) ii linux-libc-dev 3.2.0-52.78 Linux Kernel Headers for development un linux-restricted-common <none> (no description available) iU linux-server 3.2.0.51.61 Complete Linux kernel on Server Equipment. un linux-source-3.2.0 <none> (no description available) un linux-tools <none> (no description available) Output of du -sh /boot/*: root@hb2088:~# du -sh /boot/* 781K /boot/abi-3.2.0-44-generic 781K /boot/abi-3.2.0-45-generic 781K /boot/abi-3.2.0-48-generic 781K /boot/abi-3.2.0-51-generic 781K /boot/abi-3.2.0-52-generic 139K /boot/config-3.2.0-44-generic 139K /boot/config-3.2.0-45-generic 139K /boot/config-3.2.0-48-generic 139K /boot/config-3.2.0-51-generic 139K /boot/config-3.2.0-52-generic 1.6M /boot/grub 14M /boot/initrd.img-3.2.0-44-generic 14M /boot/initrd.img-3.2.0-45-generic 14M /boot/initrd.img-3.2.0-48-generic 12K /boot/lost+found 174K /boot/memtest86+.bin 176K /boot/memtest86+_multiboot.bin 2.8M /boot/System.map-3.2.0-44-generic 2.8M /boot/System.map-3.2.0-45-generic 2.8M /boot/System.map-3.2.0-48-generic 2.8M /boot/System.map-3.2.0-51-generic 2.8M /boot/System.map-3.2.0-52-generic 4.8M /boot/vmlinuz-3.2.0-44-generic 4.8M /boot/vmlinuz-3.2.0-45-generic 4.8M /boot/vmlinuz-3.2.0-48-generic 4.8M /boot/vmlinuz-3.2.0-51-generic 4.8M /boot/vmlinuz-3.2.0-52-generic

    Read the article

  • Oracle Database 12c Spatial: Vector Performance Acceleration

    - by Okcan Yasin Saygili-Oracle
Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. This allows organizations to make better decisions, respond to customers more effectively, and reduce operational costs – increasing ROI and creating competitive advantage. Oracle Database, the industry’s most advanced database, includes native location capabilities, fully integrated in the kernel, for fast, scalable, reliable and secure spatial and massive graph applications. It is a foundation for deploying enterprise-wide spatial information systems and location-enabled business applications. Developers can extend existing Oracle-based tools and applications, since they can easily incorporate location information directly in their applications, workflows, and services. Spatial Features The geospatial data features of Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence. Network Data Model Graph Features The Network Data Model graph explicitly stores and maintains a persistent data model with network connectivity and provides network analysis capabilities such as shortest path, nearest neighbors, within cost and reachability. It loads partitioned networks into memory on demand, overcoming the limitations of in-memory analysis. Partitioning massive networks into manageable sub-networks simplifies the network analysis. RDF Semantic Graph Features RDF Semantic Graph has native support for World Wide Web Consortium standards. It has open, scalable, and secure features for storing RDF/OWL ontologies and data; native inference with OWL 2, SKOS and user-defined rules; and querying RDF/OWL data with SPARQL 1.1, Java APIs, and SPARQL graph patterns in SQL. Video: Oracle Spatial and Graph Overview Oracle Spatial is embedded in the Oracle Database product, so we can use the Oracle Universal Installer (OUI). The Oracle Universal Installer (OUI) is used to install Oracle Database software. OUI is a graphical user interface utility that enables you to view the Oracle software that is installed on your machine, install new Oracle Database software, and delete Oracle software that you no longer need to use. Online Help is available to guide you through the installation process. One of the installation options is to create a database. If you select database creation, OUI automatically starts Oracle Database Configuration Assistant (DBCA) to guide you through the process of creating and configuring a database. If you do not create a database during installation, you must invoke DBCA after you have installed the software to create a database. You can also use DBCA to create additional databases. 
For installing Oracle Database 12c you may check the Installing Oracle Database Software and Creating a Database tutorial under the Oracle Database 12c 2-Day DBA Series. You can always check whether Spatial is available in your database using "select comp_id, version, status, comp_name from dba_registry where comp_id='SDO';"   One of the most notable improvements with Oracle Spatial and Graph 12c can be seen in performance increases in vector data operations. Enabling the Spatial Vector Acceleration feature (available with the Spatial option) dramatically improves the performance of commonly used vector data operations, such as sdo_distance, sdo_aggr_union, and sdo_inside. With 12c, these operations also run more efficiently in parallel than in prior versions through the use of metadata caching. For organizations that have been facing processing limitations, these enhancements enable developers to make a small set of configuration changes and quickly realize significant performance improvements. Results include improved index performance, enhanced geometry engine performance, optimized secondary filter optimizations for Spatial operators, and improved CPU and memory utilization for many advanced vector functions. Vector performance acceleration is especially beneficial when using Oracle Exadata Database Machine and other large-scale systems. Oracle Spatial and Graph vector performance acceleration builds on general improvements available to all SDO_GEOMETRY operations in these areas: caching of index metadata, concurrent update mechanisms, and optimized spatial predicate selectivity and cost functions. These optimizations enable more efficient use of CPU, memory, and partitioning, resulting in substantial query performance improvements. Usage: To accelerate the performance of spatial operators, it is recommended that you set the SPATIAL_VECTOR_ACCELERATION database system parameter to the value TRUE. (This parameter is authorized for use only by licensed Oracle Spatial users, and its default value is FALSE.) You can set this parameter for the whole system or for a single session. To set the value for the whole system, do either of the following: Enter the following statement from a suitably privileged account:   ALTER SYSTEM SET SPATIAL_VECTOR_ACCELERATION = TRUE; Or add the following to the database initialization file (xxxinit.ora):   SPATIAL_VECTOR_ACCELERATION = TRUE; To set the value for the current session, enter the following statement from a suitably privileged account:   ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE; Check out the complete list of new features on Oracle.com @ http://www.oracle.com/technetwork/database/options/spatialandgraph/overview/index.html Spatial and Graph Data Sheet (PDF) Spatial and Graph White Paper (PDF)
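As a quick illustration of the session-level switch, here is a small C# sketch that turns vector acceleration on for the current connection before running spatial queries. It assumes the Oracle managed data provider (Oracle.ManagedDataAccess) and a placeholder connection string; the ALTER SESSION statement itself is the one shown above, and it requires an Oracle Spatial license.

using Oracle.ManagedDataAccess.Client;

class EnableVectorAcceleration
{
    static void Main()
    {
        // Placeholder connection string - substitute your own data source and credentials.
        using (var conn = new OracleConnection("User Id=scott;Password=tiger;Data Source=orcl"))
        {
            conn.Open();

            // Session-level switch from the article.
            using (var cmd = new OracleCommand(
                "ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE", conn))
            {
                cmd.ExecuteNonQuery();
            }

            // Subsequent spatial queries (sdo_distance, sdo_inside, ...) issued on this
            // connection now benefit from vector performance acceleration.
        }
    }
}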

    Read the article

  • Recover that Photo, Picture or File You Deleted Accidentally

    - by The Geek
    Have you ever accidentally deleted a photo on your camera, computer, USB drive, or anywhere else? What you might not know is that you can usually restore those pictures—even from your camera’s memory stick. Windows tries to prevent you from making a big mistake by providing the Recycle Bin, where deleted files hang around for a while—but unfortunately it doesn’t work for external USB drives, USB flash drives, memory sticks, or mapped drives. The great news is that this technique also works if you accidentally deleted the photo… from the camera itself. That’s what happened to me, and prompted writing this article. Restore that File or Photo using Recuva The first piece of software that you’ll want to try is called Recuva, and it’s extremely easy to use—just make sure when you are installing it, that you don’t accidentally install that stupid Yahoo! toolbar that nobody wants. Now that you’ve installed the software, and avoided an awful toolbar installation, launch the Recuva wizard and let’s start through the process of recovering those pictures you shouldn’t have deleted. The first step on the wizard page will let you tell Recuva to only search for a specific type of file, which can save a lot of time while searching, and make it easier to find what you are looking for. Next you’ll need to specify where the file was, which will obviously be up to wherever you deleted it from. Since I deleted mine from my camera’s SD card, that’s where I’m looking for it. The next page will ask you whether you want to do a Deep Scan. My recommendation is to not select this for the first scan, because usually the quick scan can find it. You can always go back and run a deep scan a second time. And now, you’ll see all of the pictures deleted from your drive, memory stick, SD card, or wherever you searched. Looks like what happened in Vegas didn’t stay in Vegas after all… If there are a really large number of results, and you know exactly when the file was created or modified, you can switch to the advanced view, where you can sort by the last modified time. This can help speed up the process quite a bit, so you don’t have to look through quite as many files. At this point, you can right-click on any filename, and choose to Recover it, and then save the files elsewhere on your drive. Awesome! Restore that File or Photo using DiskDigger If you don’t have any luck with Recuva, you can always try out DiskDigger, another excellent piece of software. I’ve tested both of these applications very thoroughly, and found that neither of them will always find the same files, so it’s best to have both of them in your toolkit. Note that DiskDigger doesn’t require installation, making it a really great tool to throw on your PC repair Flash drive. Start off by choosing the drive you want to recover from…   Now you can choose whether to do a deep scan, or a really deep scan. Just like with Recuva, you’ll probably want to select the first one first. I’ve also had much better luck with the regular scan, rather than the “dig deeper” one. If you do choose the “dig deeper” one, you’ll be able to select exactly which types of files you are looking for, though again, you should use the regular scan first. Once you’ve come up with the results, you can click on the items on the left-hand side, and see a preview on the right.  You can select one or more files, and choose to restore them. It’s pretty simple! Download DiskDigger from dmitrybrant.com Download Recuva from piriform.com Good luck recovering your deleted files! 
And keep in mind, DiskDigger is a totally free donationware software from a single, helpful guy… so if his software helps you recover a photo you never thought you’d see again, you might want to think about throwing him a dollar or two.

    Read the article

  • Login failed for user 'sa' because the account is currently locked out. The system administrator can

    - by cabhilash
Login failed for user 'sa' because the account is currently locked out. The system administrator can unlock it. (Microsoft SQL Server, Error: 18486) SQL Server has local password policies. If a policy is enabled which locks down the account after X number of failed attempts, then the account is automatically locked down. This error with the 'sa' account is very common. sa is the default administrator login available with SQL Server, so there is a chance that an outsider has tried to brute-force your system. (This can also happen if a legitimate user tries to access the account with the wrong password. Sometimes a user changes the password without informing others, so the other users keep trying to log in with the old one.) You can unlock the account with the following options (use another admin account or connect via Windows authentication). Alter account & unlock: ALTER LOGIN sa WITH PASSWORD='password' UNLOCK Use another account: Almost everyone is aware of the sa account, so it can be a potential security risk. Even if you provide a strong password, hackers can lock the account by providing the wrong password. (You can provide extra security by installing a firewall or changing the default port, but these measures are not always practical.) As a best practice you can disable the sa account and use another account with the same privileges: ALTER LOGIN sa DISABLE You can edit the lockout options using gpedit.msc (in a command prompt type gpedit.msc and press Enter). Navigate to the Account Lockout Policy as shown in the figure. The following options are available: Account lockout threshold This security setting determines the number of failed logon attempts that causes a user account to be locked out. A locked-out account cannot be used until it is reset by an administrator or until the lockout duration for the account has expired. You can set a value between 0 and 999 failed logon attempts. If you set the value to 0, the account will never be locked out. Failed password attempts against workstations or member servers that have been locked using either CTRL+ALT+DELETE or password-protected screen savers count as failed logon attempts. Account lockout duration This security setting determines the number of minutes a locked-out account remains locked out before automatically becoming unlocked. The available range is from 0 minutes through 99,999 minutes. If you set the account lockout duration to 0, the account will be locked out until an administrator explicitly unlocks it. If an account lockout threshold is defined, the account lockout duration must be greater than or equal to the reset time. Default: None, because this policy setting only has meaning when an Account lockout threshold is specified. Reset account lockout counter after This security setting determines the number of minutes that must elapse after a failed logon attempt before the failed logon attempt counter is reset to 0 bad logon attempts. The available range is 1 minute to 99,999 minutes. If an account lockout threshold is defined, this reset time must be less than or equal to the Account lockout duration. Default: None, because this policy setting only has meaning when an Account lockout threshold is specified. When creating a SQL user you can set CHECK_POLICY=ON, which will enforce the Windows password policy on the account. The following policies will be applied: Define the Enforce password history policy setting so that several previous passwords are remembered. With this policy setting, users cannot use the same password when their password expires.  
Define the Maximum password age policy setting so that passwords expire as often as necessary for your environment, typically, every 30 to 90 days. With this policy setting, if an attacker cracks a password, the attacker only has access to the network until the password expires.  Define the Minimum password age policy setting so that passwords cannot be changed until they are more than a certain number of days old. This policy setting works in combination with the Enforce password historypolicy setting. If a minimum password age is defined, users cannot repeatedly change their passwords to get around the Enforce password history policy setting and then use their original password. Users must wait the specified number of days to change their passwords.  Define a Minimum password length policy setting so that passwords must consist of at least a specified number of characters. Long passwords--seven or more characters--are usually stronger than short ones. With this policy setting, users cannot use blank passwords, and they have to create passwords that are a certain number of characters long.  Enable the Password must meet complexity requirements policy setting. This policy setting checks all new passwords to ensure that they meet basic strong password requirements.  Password must meet the following complexity requirement, when they are changed or created: Not contain the user's entire Account Name or entire Full Name. The Account Name and Full Name are parsed for delimiters: commas, periods, dashes or hyphens, underscores, spaces, pound signs, and tabs. If any of these delimiters are found, the Account Name or Full Name are split and all sections are verified not to be included in the password. There is no check for any character or any three characters in succession. Contain characters from three of the following five categories:  English uppercase characters (A through Z) English lowercase characters (a through z) Base 10 digits (0 through 9) Non-alphabetic characters (for example, !, $, #, %) A catch-all category of any Unicode character that does not fall under the previous four categories. This fifth category can be regionally specific.
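If you prefer to script the unlock from an admin workstation, here is a minimal C# sketch that runs the same ALTER LOGIN statement shown above over a Windows-authenticated connection. The server name is a placeholder, and you should substitute a strong password of your own.

using System.Data.SqlClient;

class UnlockSaAccount
{
    static void Main()
    {
        // Connect with Windows authentication using an account that can alter logins.
        // "MYSERVER" is a placeholder instance name.
        using (var conn = new SqlConnection("Server=MYSERVER;Integrated Security=true"))
        {
            conn.Open();

            // Same statement as in the article: reset the password and clear the lockout.
            using (var cmd = new SqlCommand(
                "ALTER LOGIN sa WITH PASSWORD = 'UseAStrongPasswordHere!' UNLOCK", conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }
}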

    Read the article

  • Create Orchard Module in a Separate Project

    - by Steve Michelotti
    The Orchard Project is a new OOS Microsoft project that is being developed up on CodePlex. From the Orchard home page on CodePlex, it states “Orchard project is focused on delivering a .NET-based CMS application that will allow users to rapidly create content-driven Websites, and an extensibility framework that will allow developers and customizers to provide additional functionality through modules and themes.” The Orchard Project site contains additional information including documentation and walkthroughs. The ability to create a composite solution based on a collection of modules is a compelling feature. In Orchard, these modules can just be created as simple MVC Areas or they can also be created inside of stand-alone web application projects.  The walkthrough for writing an Orchard module that is available on the Orchard site uses a simple Area that is created inside of the host application. It is based on the Orchard MIX presentation. This walkthrough does an effective job introducing various Orchard concepts such as hooking into the navigation system, theme/layout system, content types, and more.  However, creating an Orchard module in a separate project does not seem to be concisely documented anywhere. Orchard ships with several module OOTB that are in separate assemblies – but again, it’s not well documented how to get started building one from scratch. The following are the steps I took to successfully get an Orchard module in a separate project up and running. Step 1 – Download the OrchardIIS.zip file from the Orchard Release page. Unzip and open up the solution. Step 2 – Add your project to the solution. I named my project “Orchard.Widget” and used and “MVC 2 Empty Web Application” project type. Make sure you put the physical path inside the “Modules” sub-folder to the main project like this: At this point the solution should look like: Step 3 – Add assembly references to Orchard.dll and Orchard.Core.dll. Step 4 – Add a controller and view.  I’ll just create a Hello World controller and view. Notice I created the view as a partial view (*.ascx). Also add the [Themed] attribute to the top of the HomeController class just like the normal Orchard walk through shows it. Step 5 – Add Module.txt to the project root. The is a very important step. Orchard will not recognize your module without this text file present.  It can contain just the name of your module: name: Widget Step 6 – Add Routes.cs. Notice I’ve given an area name of “Orchard.Widget” on lines 26 and 33. 1: using System; 2: using System.Collections.Generic; 3: using System.Web.Mvc; 4: using System.Web.Routing; 5: using Orchard.Mvc.Routes; 6:   7: namespace Orchard.Widget 8: { 9: public class Routes : IRouteProvider 10: { 11: public void GetRoutes(ICollection<RouteDescriptor> routes) 12: { 13: foreach (var routeDescriptor in GetRoutes()) 14: { 15: routes.Add(routeDescriptor); 16: } 17: } 18:   19: public IEnumerable<RouteDescriptor> GetRoutes() 20: { 21: return new[] { 22: new RouteDescriptor { 23: Route = new Route( 24: "Widget/{controller}/{action}/{id}", 25: new RouteValueDictionary { 26: {"area", "Orchard.Widget"}, 27: {"controller", "Home"}, 28: {"action", "Index"}, 29: {"id", ""} 30: }, 31: new RouteValueDictionary(), 32: new RouteValueDictionary { 33: {"area", "Orchard.Widget"} 34: }, 35: new MvcRouteHandler()) 36: } 37: }; 38: } 39: } 40: } Step 7 – Add MainMenu.cs. This will make sure that an item appears in the main menu called “Widget” which points to the module. 
1: using System; 2: using Orchard.UI.Navigation; 3:   4: namespace Orchard.Widget 5: { 6: public class MainMenu : INavigationProvider 7: { 8: public void GetNavigation(NavigationBuilder builder) 9: { 10: builder.Add(menu => menu.Add("Widget", item => item.Action("Index", "Home", new 11: { 12: area = "Orchard.Widget" 13: }))); 14: } 15:   16: public string MenuName 17: { 18: get { return "main"; } 19: } 20: } 21: } Step 8 – Clean up web.config. By default Visual Studio adds numerous sections to the web.config. The sections that can be removed are: appSettings, connectionStrings, authentication, membership, profile, and roleManager. Step 9 – Delete Global.asax. This project will ultimately be running from inside the Orchard host so this “sub-site” should not have its own Global.asax.   Now you’re ready the run the app.  When you first run it, the “Widget” menu item will appear in the main menu because of the MainMenu.cs file we added: We can then click the “Widget” link in the main menu to send us over to our view:   Packaging From start to finish, it’s a relatively painless experience but it could be better. For example, a Visual Studio project template that encapsulates aspects from this blog post would definitely make it a lot easier to get up and running with creating an Orchard module.  Another aspect I found interesting is that if you read the first paragraph of the walkthrough, it says, “You can also develop modules as separate projects, to be packaged and shared with other users of Orchard CMS (the packaging story is still to be defined, along with marketplaces for sharing modules).” In particular, I will be extremely curious to see how the “packaging story” evolves. The first thing that comes to mind for me is: what if we explored MvcContrib Portable Areas as a potential mechanism for this packaging? This would certainly make things easy since all artifacts (aspx, aspx, images, css, javascript) are all wrapped up into a single assembly. Granted, Orchard does have its own infrastructure for layouts and themes but it seems like integrating portable areas into this pipeline would not be a difficult undertaking. Maybe that’ll be the next research task. :)
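Step 4 above mentions a Hello World controller and partial view but does not show the code. Here is a minimal sketch of what that controller might look like, using the [Themed] attribute the walkthrough calls for; the namespace for ThemedAttribute and the view name are assumptions, so adjust them to your Orchard version.

using System.Web.Mvc;
using Orchard.Themes; // assumed location of ThemedAttribute - verify in your Orchard source

namespace Orchard.Widget.Controllers
{
    [Themed] // lets the Orchard theme/layout system wrap the rendered view
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            // Returns the partial view (Index.ascx) created in Step 4;
            // Orchard supplies the surrounding chrome because of [Themed].
            return View();
        }
    }
}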

    Read the article

  • SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

    - by Pinal Dave
    [Notes from Pinal]: SSIS is very well explored subject, however, there are so many interesting elements when we read, we learn something new. A similar concept has been Parent-Child ETL architecture’s relationship in SSIS. Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Fields series database expert Tim Mitchell (partner at Linchpin People) shares very interesting conversation related to how to understand SSIS Parameters in Parent-Child ETL Architectures. In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures. A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern.  Simply put, this is a pattern in which packages are executed by other packages.  An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages.  For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic. When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package.  In older versions of SSIS, this process was possible but not necessarily simple.  When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages.  Package configurations, while effective, were not the easiest tool to work with.  Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose. In the example I will use for this demonstration, I’ll create two packages: one intended for use as a child package, and the other configured to execute said child package.  In the parent package I’m going to build a for each loop container in SSIS, and use package parameters to pass in a value – specifically, a ClientID – for each iteration of the loop.  The child package will be executed from within the for each loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package. Configuring the Child and Parent Packages When you create a new package, you’ll see the Parameters tab at the package level.  Clicking over to that tab allows you to add, edit, or delete package parameters. As shown above, the sample package has two parameters.  Note that I’ve set the name, data type, and default value for each of these.  Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution.  In this example, I have one parameter that is required, and the other is not. Let’s shift over to the parent package briefly, and demonstrate how to supply values to these parameters in the child package.  Using the execute package task, you can easily map variable values in the parent package to parameters in the child package. The execute package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package.  Note that there is no value mapped to the child package parameter named pOutputFolder.  
Since this parameter has the Required property set to False, we don’t have to specify a value for it, which will cause that parameter to use the default value we supplied when designing the child pacakge. The last step in the parent package is to create the for each loop container I mentioned earlier, and place the execute package task inside it.  I’m using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this more in depth here).  For each iteration of the loop, a different client ID value will be passed into the child package parameter. The final step is to configure the child package to actually do something meaningful with the parameter values passed into it.  In this case, I’ve modified the OleDB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client’s data.  Additionally, I’ll use both the pClientID and pOutputFolder parameters to dynamically build the output filename. As shown, the pClientID is used in the WHERE clause, so we only get the current client’s invoices for each iteration of the loop. For the flat file connection, I’m setting the Connection String property using an expression that engages both of the parameters for this package, as shown above. Parting Thoughts There are many uses for package parameters beyond a simple parent-child design pattern.  For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters.  Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL. Also, you can also have project parameters as well as package parameters.  Project parameters work in much the same way as package parameters, but the parameters apply to all packages in a project, not just a single package. Conclusion Of the numerous advantages of using catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list.  Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation. If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
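To make the "through T-SQL" option above concrete, here is a rough C# sketch that asks the SSISDB catalog to run the child package with a value for pClientID, by calling catalog.create_execution, catalog.set_execution_parameter_value and catalog.start_execution. The folder, project, package and client ID values are placeholders, and you should verify the procedure parameter names and object-type codes against your own SSISDB instance before relying on this.

using System.Data;
using System.Data.SqlClient;

class RunChildPackageWithParameter
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=MYSERVER;Database=SSISDB;Integrated Security=true"))
        {
            conn.Open();

            // 1. Create an execution for the child package (names are placeholders).
            var create = new SqlCommand("catalog.create_execution", conn) { CommandType = CommandType.StoredProcedure };
            create.Parameters.AddWithValue("@folder_name", "ETL");
            create.Parameters.AddWithValue("@project_name", "ParentChildDemo");
            create.Parameters.AddWithValue("@package_name", "ChildPackage.dtsx");
            create.Parameters.AddWithValue("@use32bitruntime", false);
            var execId = create.Parameters.Add("@execution_id", SqlDbType.BigInt);
            execId.Direction = ParameterDirection.Output;
            create.ExecuteNonQuery();

            // 2. Supply a value for the pClientID package parameter
            //    (object_type 30 is the package-level parameter scope, as I recall it).
            var setParam = new SqlCommand("catalog.set_execution_parameter_value", conn) { CommandType = CommandType.StoredProcedure };
            setParam.Parameters.AddWithValue("@execution_id", execId.Value);
            setParam.Parameters.AddWithValue("@object_type", 30);
            setParam.Parameters.AddWithValue("@parameter_name", "pClientID");
            setParam.Parameters.AddWithValue("@parameter_value", 42);
            setParam.ExecuteNonQuery();

            // 3. Start the execution.
            var start = new SqlCommand("catalog.start_execution", conn) { CommandType = CommandType.StoredProcedure };
            start.Parameters.AddWithValue("@execution_id", execId.Value);
            start.ExecuteNonQuery();
        }
    }
}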

    Read the article

  • Oracle’s Web Experience Management

    - by Christie Flanagan
    Today’s guest post on Oracle’s Web Experience Management comes from a member of our WebCenter Evangelist team, Noël Jaffré, a Principal Technologist based in France.Oracle’s Web Experience Management (WEM) solution enables organizations to optimize the online channel for driving marketing and customer experience management success. It empowers business users to manage the web presence and create rich and engaging online experiences for customers and prospects. Oracle's WEM platform provides a framework to simplify the integration of Oracle, third-party and custom-built applications. This framework essentially allows the creation and integration of applications using one single business interface called the WEM interface. It includes the following: Single sign-on access control for all integrated applications using the Central Authentication Service (CAS) component. A single centralized administration window for user, role, and native applications management including site management. Community server management, gadget server management as well as management for partner integrated technologies. A Representational State Transfer (REST) API for accessing WebCenter Sites data. REST services are supported on both Oracle WebCenter Sites and Oracle WebCenter Sites Satellite Server to leverage the satellite server cache. All REST requests are cached for web consuming applications as well for the high performance delivery of native applications on the mobile channel. Oracle WebCenter Sites’ Web Experience Management environment enables organizations to deliver a compelling online experience to customers by simplifying the deployment and management of sophisticated and engaging websites. The WebCenter Sites platform automates the entire process of managing web content including: Authoring:  Business users can easily contribute and manage web content in real-time, with intuitive interfaces and drag-and-drop content authoring and layout capabilities designed for the non-technical user. Contextual Content Targeting: Marketers are empowered to create and manage targeted campaigns with relevant recommendations and promotions based on the context of the session of the visitor such as his or her navigation history, user profile, language, location or other information shared during the visitor session. Content Publishing and Deployment: It offers advanced multi-site management capabilities for departmental or regional sites, as well as strong multi-lingual and multi-locale content management. The remote satellite server caching infrastructure provides high-performance, distributed caching, tuned to deliver high-volume, targeted and multi-lingual sites. Analytics and Optimization: Business users and marketers have the ability to measure the effectiveness of their online content and campaigns at a granular level. Editors and marketers can immediately determine whether a given article or promotion is relevant to a particular customer segment. User-generated Content: Marketers can enable blogs, comments, rating and reviews on the website.  All comments and reviews posted to the website can be moderated from the administrator interface either manually or automatically using filters, whitelists, blacklists or community based moderation. Personalized Gadget Dashboards:  Site managers can deploy gadgets, small applications using web content, individually or as part of dashboards containing multiple gadgets.  
These gadget dashboards enable site visitors to create their own “MyPage” on a given site where they can select and customize the gadgets that the site administrator has made available.  Any gadget that conforms to the iGoogle/OpenSocial standard can be made available to site visitors, or they can be created within the WEM interface. Oracle's WEM platform also provides a unique environment for the delivery of a rich, multichannel online experience for site visitors through its advanced management modules for mobile. With Oracle’s WEM solution, it’s easy to control branding and deliver a consistent message while repurposing web content for publication to mobile devices, kiosks and much more. This distinctive approach provides: HTML5 Delivery: HTML5 delivery which includes native support for adaptive design that responds to the user’s computer screen resolution and orientation. The approach is less driven by the particular hardware and more driven by the user’s interactions with the device. In other words, this approach takes both the screen interactions (either cursor or touch) and screen sizes and orientation into consideration. A Unique Native Mobile Extension Environment for Contributors: From the WEM interface, a contributor can directly manage their mobile channel, using the tooling already in place for driving the traditional web presence. This includes the mobile presentation, as well as mobile insite editing, drag and drop page layout, and in-context recommendations and personalization. Optimized REST APIs for High Performance Content Delivery on Native Mobile Device Applications: WebCenter Sites’ REST API uses the underlying HTTP methods (GET, POST, PUT, DELETE) to interact with resources. Resources support two types of input and output formats -- XML and JSON. REST calls are customizable to optimize the interactions between the content repositories and the client applications. Caching is essential to decrease network loads and improve overall reliability and usability of the applications and user interactions. REST results are cached through the highly efficient Oracle WebCenter Sites caching architecture.
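To make the REST layer a little more concrete, here is a small C# sketch of a client fetching a WebCenter Sites resource as JSON over the REST API described above. The host name and resource path are purely illustrative, and authentication is omitted; consult the WebCenter Sites REST documentation for the real URL scheme and security setup.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RestClientSketch
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Ask for JSON; the API also supports XML.
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));

            // Hypothetical resource URL - replace with a real Sites/Satellite Server endpoint.
            HttpResponseMessage response =
                await client.GetAsync("http://sites.example.com/cs/REST/sites");
            response.EnsureSuccessStatusCode();

            string json = await response.Content.ReadAsStringAsync();
            Console.WriteLine(json);
        }
    }
}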

    Read the article

  • Event Logging in LINQ C# .NET

The first thing you'll want to do before using this code is to create a table in your database called TableHistory:

CREATE TABLE [dbo].[TableHistory] (
    [TableHistoryID] [int] IDENTITY NOT NULL ,
    [TableName] [varchar] (50) NOT NULL ,
    [Key1] [varchar] (50) NOT NULL ,
    [Key2] [varchar] (50) NULL ,
    [Key3] [varchar] (50) NULL ,
    [Key4] [varchar] (50) NULL ,
    [Key5] [varchar] (50) NULL ,
    [Key6] [varchar] (50) NULL ,
    [ActionType] [varchar] (50) NULL ,
    [Property] [varchar] (50) NULL ,
    [OldValue] [varchar] (8000) NULL ,
    [NewValue] [varchar] (8000) NULL ,
    [ActionUserName] [varchar] (50) NOT NULL ,
    [ActionDateTime] [datetime] NOT NULL
)

Once you have created the table, you'll need to add it to your custom LINQ class (which I will refer to as DboDataContext), thus creating the TableHistory class. Then, you'll need to add the History.cs file to your project. You'll also want to add the following code to your project to get the system date:

public partial class DboDataContext
{
    [Function(Name = "GetDate", IsComposable = true)]
    public DateTime GetSystemDate()
    {
        MethodInfo mi = MethodBase.GetCurrentMethod() as MethodInfo;
        return (DateTime)this.ExecuteMethodCall(this, mi, new object[] { }).ReturnValue;
    }
}

private static Dictionary<Type, Delegate> _cachedIL = new Dictionary<Type, Delegate>();

public static T CloneObjectWithIL<T>(T myObject)
{
    Delegate myExec = null;
    if (!_cachedIL.TryGetValue(typeof(T), out myExec))
    {
        // Create ILGenerator
        DynamicMethod dymMethod = new DynamicMethod("DoClone", typeof(T), new Type[] { typeof(T) }, true);
        ConstructorInfo cInfo = myObject.GetType().GetConstructor(new Type[] { });
        ILGenerator generator = dymMethod.GetILGenerator();
        LocalBuilder lbf = generator.DeclareLocal(typeof(T));
        //lbf.SetLocalSymInfo("_temp");
        generator.Emit(OpCodes.Newobj, cInfo);
        generator.Emit(OpCodes.Stloc_0);
        foreach (FieldInfo field in myObject.GetType().GetFields(
            System.Reflection.BindingFlags.Instance |
            System.Reflection.BindingFlags.Public |
            System.Reflection.BindingFlags.NonPublic))
        {
            // Load the new object on the eval stack... (currently 1 item on eval stack)
            generator.Emit(OpCodes.Ldloc_0);
            // Load initial object (parameter) (currently 2 items on eval stack)
            generator.Emit(OpCodes.Ldarg_0);
            // Replace value by field value (still currently 2 items on eval stack)
            generator.Emit(OpCodes.Ldfld, field);
            // Store the value of the top on the eval stack into
            // the object underneath that value on the value stack.
            // (0 items on eval stack)
            generator.Emit(OpCodes.Stfld, field);
        }
        // Load new constructed obj on eval stack -> 1 item on stack
        generator.Emit(OpCodes.Ldloc_0);
        // Return constructed object. --> 0 items on stack
        generator.Emit(OpCodes.Ret);
        myExec = dymMethod.CreateDelegate(typeof(Func<T, T>));
        _cachedIL.Add(typeof(T), myExec);
    }
    return ((Func<T, T>)myExec)(myObject);
}

I got both of the above methods off of the net somewhere (maybe even from CodeProject), but it's been long enough that I can't recall where I got them.

Explanation of the History Class

The History class records changes by creating a TableHistory record, inserting the values of the primary key for the table being modified into the Key1, Key2, ..., Key6 columns (if you have more than 6 values that make up a primary key on any table, you'll want to modify this), setting the type of change being made in the ActionType column (INSERT, UPDATE, or DELETE), the old value and new value if it happens to be an update action, and the date and Windows identity of the user who made the change.

Let's examine what happens when a call is made to the RecordLinqInsert method:

public static void RecordLinqInsert(DboDataContext dbo, IIdentity user, object obj)
{
    TableHistory hist = NewHistoryRecord(obj);
    hist.ActionType = "INSERT";
    hist.ActionUserName = user.Name;
    hist.ActionDateTime = dbo.GetSystemDate();
    dbo.TableHistories.InsertOnSubmit(hist);
}

private static TableHistory NewHistoryRecord(object obj)
{
    TableHistory hist = new TableHistory();
    Type type = obj.GetType();
    PropertyInfo[] keys;
    if (historyRecordExceptions.ContainsKey(type))
    {
        keys = historyRecordExceptions[type].ToArray();
    }
    else
    {
        keys = type.GetProperties().Where(o => AttrIsPrimaryKey(o)).ToArray();
    }
    if (keys.Length > KeyMax)
        throw new HistoryException("object has more than " + KeyMax.ToString() + " keys.");
    for (int i = 1; i <= keys.Length; i++)
    {
        typeof(TableHistory)
            .GetProperty("Key" + i.ToString())
            .SetValue(hist, keys[i - 1].GetValue(obj, null).ToString(), null);
    }
    hist.TableName = type.Name;
    return hist;
}

protected static bool AttrIsPrimaryKey(PropertyInfo pi)
{
    var attrs = from attr in pi.GetCustomAttributes(typeof(ColumnAttribute), true)
                where ((ColumnAttribute)attr).IsPrimaryKey
                select attr;
    if (attrs != null && attrs.Count() > 0)
        return true;
    else
        return false;
}

RecordLinqInsert takes as input a data context which it will use to write to the database, the user, and the LINQ object to be recorded (a single object, for instance, a Customer or Order object if you're using AdventureWorks). It then calls the NewHistoryRecord method, which uses LINQ to Objects in conjunction with the AttrIsPrimaryKey method to pull all the primary key properties, set the Key1-KeyN properties of the TableHistory object, and return the new TableHistory object. The code would be called in an application like so:
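The excerpt is cut off right where the original usage example would appear, so here is a hedged sketch of how a call might look, based only on the RecordLinqInsert signature shown above; the Customer entity and its properties are illustrative, not part of the original article.

using System.Security.Principal;

static void SaveNewCustomer()
{
    using (var dbo = new DboDataContext())
    {
        // Hypothetical entity from the same data context.
        var customer = new Customer { CustomerID = 1234, Name = "Northwind Traders" };
        dbo.Customers.InsertOnSubmit(customer);

        // Record the insert in TableHistory using the current Windows identity.
        History.RecordLinqInsert(dbo, WindowsIdentity.GetCurrent(), customer);

        // One SubmitChanges call persists both the new Customer row and its history record.
        dbo.SubmitChanges();
    }
}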

    Read the article

  • Happy 3rd Birthday SilverlightCream!

    - by Dave Campbell
    Happy 3rd Birthday!     Yesterday (May 16) was the 'Birthday' of SilverlightCream, which started just after MIX in 2007 with a post "Interesting Silverlight posts today: Silverlight Control & Silverlight Pad". Too many good posts flying around led me to want to archive them, particularly since I was being aggregated at a new site Silverlight.net, and I could give some of that 'reach' to the community. Saturday's post was number 862, and as of that post, there were 5697 blog posts archived in the database all tagged up and searchable at SilverlightCream.com using the search page. The search needs to be better, and that's another discussion, but it does work. The blog didn't begin life as the SilverlightCream blog, as is obvious from the name, but once I realized people were following it closely, I've tried to keep the signal-to-noise ratio very high. I even secured another blog for when I just want to rant about something to keep that stuff out of this one :) If you've been around since MIX07 days you've heard all this, but after talking to some people at MIX10 I realized not everyone knows all the ways the information is presented, so I figured doing a post like this once a year probably isn't a bad idea :) I scrounge through an ever-growing list of blogs (right now sitting at 505) looking for good stuff. I try to spin through the list every day, but with the list growing that large, it's getting tough. I usually use it as a background task while working or watching TV. If I just sit and go through the blogs it takes about an hour. The list is long enough now that from time to time, I'll only get partway through it and have 10 to 13 entries, so I'll just stop there and go on the next day... I don't like to have more than 15 in any single post. It's all pattern recognition as in "seen that", "seen that", "that's new", etc... so if you're a blogger, look at a heading below for some comments about blogging from my perspective. When I see something new, I make sure you're not pulling a 'Mike Taulty' on me and dumping 6 or 8 new posts in one day :), and I tag the ones I want to review. If there's not a lot going on, I may just push the posts as I come across them. Some days there may be 60 posts in that 'to review' list! Some are non-Silverlight, some are essentially duplicates of others, some are demos, ads, new releases of something, session materials, etc. I push lots of material into a database at WynApse.com, and the "Tagged Posts" menu on the left sidebar there takes you to a tag cloud of (at this very moment) "9224 articles tagged 13915 different ways using 459 unique tags". There are links in there on Gibson guitars, Jazz Guitar instructional stuff, Ford F-250 links, and tons of technical and non-technical stuff I've been aggregating for about 5 years now. So when I decide to blog (or shoutout) something, I first push it into the database at WynApse.com. Then I tag it all up and push it into the database at SilverlightCream.com. Then it gets pushed to @SilverlightNews. For a little over a year now, we're tracking unique IP hits on posts launched from either the blog post or from one of the SilverlightCream.com pages, and the posts with top hits from unique IP addresses in the last 7 days are displayed in a 'Skim' page at SilverlightCream... and that page needs work as well. The Skim page and tracking was the brainchild of my buddy Michael Washington. 
What I blog/shoutout

After some time doing posts, I decided there were things that probably have no need to be searchable, but are good information, so I post those as 'Shoutouts'. Eventually I also decided the Shoutouts should get posted to @SilverlightNews, and that's now taking place.

Notes to bloggers

Remember I said spinning through the Big List-o-Blogs™ is pattern recognition... that means I don't spend a lot of time on any individual blog deciding if it has new content. If you're familiar with the term 'Above the Fold', then you're probably ok. If I have to scroll the page to see if there's something new, or wade through some maze of menus, I'm probably going to miss new stuff. Likewise if you only show the latest on the front page and make it a puzzle to find the rest of them, or if you make the titles and initial graphics almost identical to the previous article, I'll miss it. Another thing is name/brand-recognition. Far be it from me (WynApse) to comment on someone blogging with a pseudonym, but if you want to get some recognition, you are going to want your name to be available somewhere. I can think right off the top of my head of a couple of good blogs where I have no idea of the individuals' real names. I can pull that off a bit because I've been around so long almost everyone knows who I am, but if you're new to the blog-o-sphere, being able to be name-recognized is as important as getting your brand out there.

Kick my tires

Finally, stuff happens... I may hit the wrong key and delete your blog, or a post might slip past me and I might not realize it's new because of the naming, and never blog it. If you think I missed something, send me an email or use the submit page at SilverlightCream.com. Some bloggers have figured out that if they submit (one way or another) to me, their posts will go out next. I try to honor anyone that takes the time to submit with a quicker 'Cream posting. Thanks!

Finally, thanks to everyone that contributes to the community as a whole... the blogs, the videos, and the presentations. A special thanks to everyone that reads SilverlightCream, or follows @WynApse or @SilverlightNews. Keep it all coming, and... Stay in the 'Light

    Read the article

  • MySQL Connector/Net 6.5.5 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.5.5, a new maintenance release of our 6.5 series, has been released.  This release is GA quality and is appropriate for use in production environments.  Please note that 6.6 is our latest driver series and is the recommended product for development. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site). The 6.5.5 version of MySQL Connector/Net brings the following fixes:

- Fix for ArgumentNull exception when using Take().Count() in a LINQ to Entities query (MySql bug #64749, Oracle bug #13913047).
- Fix for type varchar changed to bit when saving in Table Designer (Oracle bug #13916560).
- Fix for error when trying to change the name of an Index in the Indexes/Keys editor; along with this fix, users can now change the Index type of a new Index, which could not be done in previous versions, and when changing the Index name the change is reflected in the list view at the left side of the Indexes/Keys editor (Oracle bug #13613801).
- Fix for stored procedure call using only its name with EF code first (MySql bug #64999, Oracle bug #14008699).
- Fix for List.Contains generating a chain of ORs instead of a more efficient IN clause in LINQ to Entities (Oracle bug #14016344, MySql bug #64934).
- Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produced MySQL's LOCATE instead of LIKE (MySql bug #64935, Oracle bug #14009363).
- Fix for the script generated for code first containing a wrong ALTER TABLE and a wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091).
- Fix and code contribution for the bug "Timed out sessions are removed without notification", which allows the Expired CallBack to be enabled when the Session Provider times out a session (MySql bug #62266, Oracle bug #13354935).
- Fix for exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779).
- Fix for session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracle bug #13733054).
- Fixed deleting a user profile using the Profile provider (MySQL bug #64470, Oracle bug #13790123).
- Fix for bug "Cannot Create an Entity with a Key of Type String" (MySQL bug #65289, Oracle bug #14540202). This fix checks whether the type has a FixedLength facet set in order to create a char; otherwise it creates varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First (also tested in Database First). Unit tests added for Code First and ProviderManifest.
- Fix for bug "CacheServerProperties can cause 'Packet too large' error". The issue was due to the Max_allowed_packet server property not being read when CacheServerProperties is true: the value was read only on the first connection, and the following pooled connections had a wrong value, causing a 'Packet too large' error. A unit test for this scenario is also included; all unit tests passed (MySQL bug #66578, Oracle bug #14593547).
- Fix for handling unnamed parameters in MySqlCommand. This fix allows MySqlCommand to handle parameters without requiring names (e.g. INSERT INTO Test (id,name) VALUES (?, ?)) (MySQL bug #66060, Oracle bug #14499549).
- Fixed inheritance in Entity Framework Code First scenarios. The Discriminator column is created using its correct type, varchar(128) (MySql bug #63920, Oracle bug #13582335).
- Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048).
- Fixed bug "ASP.NET Membership database fails on MySql database UTF32" (MySQL bug #65144, Oracle bug #14495292).
- Fix for MySqlCommand.LastInsertedId holding only 32-bit values (MySql bug #65452, Oracle bug #14171960) by changing several internal declarations of lastinsertid from int to long.
- Fixed "Decimal type should have digits at right of decimal point"; the default is now 2, but the user's changes in the EDM designer are recognized (MySql bug #65127, Oracle bug #14474342).
- Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715).
- Fix for error when calling RoleProvider.RemoveUserFromRole(), which caused an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338).
- Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand": too many MemoryStream instances were created (MySql bug #65696, Oracle bug #14468204).
- Added ANTLR attribution notice (Oracle bug #14379162).
- Fixed Entity Framework + MySQL Connector/Net throwing exceptions in partial trust (MySql bug #65036, Oracle bug #14668820).
- Added support in the parser for Datetime and Time types with precision when using Server 5.6 (no bug number).
- Small improvement to MySqlPoolManager CleanIdleConnections for better idle-cleanup-timer behavior at startup (MySql bug #66472, Oracle bug #14652624).
- Fix for bug "TIMESTAMP values are mistakenly represented as DateTime with Kind = Local" (MySql bug #66964, Oracle bug #14740705).
- Fix for bug "Keyword not supported. Parameter name: AttachDbFilename" (MySql bug #66880, Oracle bug #14733472).
- Added support to the MySQL script file to retrieve data when using "SHOW" statements.
- Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674).
- Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718).
- Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176).
- Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964).

The release is available to download at http://dev.mysql.com/downloads/connector/net/6.5.html

Documentation
-------------------------------------
You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html
You can find our team blog at http://blogs.oracle.com/MySQLOnWindows.
You can also post questions on our forums at http://forums.mysql.com/.
Enjoy and thanks for the support!
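As a quick illustration of the List.Contains fix listed above, a LINQ to Entities query shaped like the sketch below is the kind that should now translate to a single IN clause rather than a chain of ORs. The Customer entity and BlogContext class are assumptions made up for the sketch, not part of the connector, and the MySQL Connector/Net EF provider is assumed to be configured in app.config:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

class ContainsExample
{
    static void Main()
    {
        var ids = new List<int> { 1, 2, 3 };

        using (var ctx = new BlogContext())
        {
            // With the 6.5.5 fix this is expected to generate ... WHERE Id IN (1, 2, 3)
            // instead of (Id = 1) OR (Id = 2) OR (Id = 3).
            var matches = ctx.Customers
                             .Where(c => ids.Contains(c.Id))
                             .ToList();
        }
    }
}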

    Read the article

  • DBCC CHECKDB (BatmanDb, REPAIR_ALLOW_DATA_LOSS) – Are you Feeling Lucky?

    - by David Totzke
    I’m currently working for a client on a PowerBuilder to WPF migration.  It’s one of those “I could tell you, but I’d have to kill you” kind of clients and the quick-lime pits are currently occupied by the EMC tech…but I’ve said too much already. At approximately 3 or 4 pm that day users of the Batman[1] application here in Gotham[1] started to experience problems accessing the application.  Batman[2] is a document management system here that also integrates with the ERP system.  Very little goes on here that doesn’t involve Batman in some way.  The errors being received seemed to point to network issues (TCP protocol error, connection forcibly closed by the remote host etc…) but the real issue was much more insidious. Connecting to the database via SSMS and performing selects on certain tables underlying the application areas that were having problems started to reveal the issue.  You couldn’t do a SELECT * FROM MyTable without it bombing and giving the same error noted above.  A run of DBCC CHECKDB revealed 14 tables with corruption.  One of the tables with issues was the Document table.  Pretty central to a “document management” system.  Information was obtained from IT that a single drive in the SAN went bad in the night.  A new drive was in place and was working fine.  The partition that held the Batman database is configured for RAID Level 5 so a single drive failure shouldn’t have caused any trouble and yet, the database is corrupted.  They do hourly incremental backups here so the first thing done was to try a restore.  A restore of the most recent backup failed so they worked backwards until they hit a good point.  This successful restore was for a backup at 3AM – a full day behind.  This time also roughly corresponds with the time the SAN started to report the drive failure.  The plot thickens… I got my hands on the output from DBCC CHECKDB and noticed a pattern.  What’s sad is that nobody that should have noticed the pattern in the DBCC output did notice.  There was a rush to do things to try and recover the data before anybody really understood what was wrong with it in the first place.  Cooler heads must prevail in these circumstances and some investigation should be done and a plan of action laid out or you could end up making things worse[3].  DBCC CHECKDB also told us that: repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB Yikes.  That means that the database is so messed up that you’re definitely going to lose some stuff when you repair it to get it back to a consistent state.  All the more reason to do a little more investigation into the problem.  Rescuing this database is preferable to having to export all of the data possible from this database into a new one.  This is a fifteen year old application with about seven hundred tables.  There are TRIGGERS everywhere not to mention the referential integrity constraints to deal with.  Only fourteen of the tables have an issue.  We have a good backup that is missing the last 24 hours of business which means we could have a “do-over” of yesterday but that’s not a very palatable option either. All of the affected tables had TEXT columns and all of the errors were about LOB data types and orphaned off-row data which basically means TEXT, IMAGE or NTEXT columns.  If we did a SELECT on an affected table and excluded those columns, we got all of the rows.  We exported that data into a separate database.  Things are looking up.  
Working on a copy of the production database we then ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and that “fixed” everything up.  The allow data loss option will delete the bad rows.  This isn’t too horrible as we have all of those rows minus the text fields from our earlier export.  Now I could LEFT JOIN to the exported data to find the missing rows and INSERT them minus the TEXT column data. We had the restored data from the good 3AM backup that we could now JOIN to and, with fingers crossed, recover the missing TEXT column information.  We got lucky in that all of the affected rows were old and in the end we didn’t lose anything.  :O  All of the row counts along the way worked out and it looks like we dodged a major bullet here.

We’ve heard back from EMC and it turns out the SAN firmware that they were running here is apparently buggy.  This thing is only a couple of months old.  Grrr…. They dispatched a technician that night to come and update it.  That explains why RAID didn’t save us.

All-in-all this could have been a lot worse.  Given the root cause here, they basically won the lottery in not losing anything.

Here are a few links to some helpful posts on the SQL Server Engine blog.  I love the title of the first one:

Which part of 'REPAIR_ALLOW_DATA_LOSS' isn't clear?
CHECKDB (Part 8): Can repair fix everything? (in fact, read the whole series)
Ta da! Emergency mode repair (we didn’t have to resort to this one thank goodness)

Dave

Just because I can…

[1] Names have been changed to protect the guilty.
[2] I'm Batman.
[3] And if I'm the coolest head in the room, you've got even bigger problems...
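For reference, the re-insert step described above boils down to something along these lines. This is only a sketch: the BatmanExport database, the dbo.Document table, the DocumentID key, and the column list are assumptions made up for illustration, and a real recovery has to use the actual schema.

using System;
using System.Data.SqlClient;

class RecoverMissingRows
{
    static void Main()
    {
        // Assumed names for illustration only: BatmanDb is the repaired database,
        // BatmanExport holds the rows exported (minus the TEXT columns) before the repair,
        // and dbo.Document is keyed by DocumentID.
        const string connStr = "Server=.;Database=BatmanDb;Integrated Security=true";

        // Re-insert the rows that REPAIR_ALLOW_DATA_LOSS deleted.
        // If DocumentID is an IDENTITY column, wrap this in SET IDENTITY_INSERT ON/OFF.
        const string sql = @"
            INSERT INTO dbo.Document (DocumentID, Title)   -- list every non-TEXT column here
            SELECT e.DocumentID, e.Title
            FROM BatmanExport.dbo.Document AS e
            LEFT JOIN dbo.Document AS d ON d.DocumentID = e.DocumentID
            WHERE d.DocumentID IS NULL;";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            Console.WriteLine("Re-inserted {0} rows.", cmd.ExecuteNonQuery());
        }
    }
}

The missing TEXT column data can then be pulled back with a similar UPDATE ... JOIN against the restored 3AM copy.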

    Read the article

  • GlassFish Clustering with DCOM on Windows

    - by ByronNevins
    DCOM - Distributed COM, a Microsoft protocol for communicating with Windows machines.

Why use DCOM?

In GlassFish 3.1 SSH is used as the standard way to run commands on remote nodes for clustering.  It is very difficult for users to get SSH configured properly on Windows.  SSH does not come with Windows so we have to depend on third party tools, and then the user is forced to install and configure these tools -- which can be tricky. DCOM is available on all supported platforms.  It is built into Windows. The idea is to use DCOM to communicate with remote Windows nodes.  This has the huge advantage that the user has to do minimal, if any, configuration on the Windows nodes.

Implementation Highlights

Two open source libraries have been added to GlassFish:
- Jcifs – a SAMBA implementation in Java
- J-interop – a Java implementation for making DCOM calls to remote Windows computers

Note that any supported platform can use DCOM to work with Windows nodes -- not just Windows. E.g. you can have a Linux DAS work with Windows remote instances. All existing SSH commands now have a corresponding DCOM command – except for setup-ssh, which isn’t needed for DCOM.  validate-dcom is an all-new command.

New DCOM Commands
- create-node-dcom
- delete-node-dcom
- install-node-dcom
- list-nodes-dcom
- ping-node-dcom
- uninstall-node-dcom
- update-node-dcom
- validate-dcom
- setup-local-dcom (this is only available via Update Center for GlassFish 3.1.2)

These commands are in place in the trunk (4.0) and in the branch (3.1.2).

Windows Configuration Challenges

There are an infinite number of possible configurations of Windows if you look at it as a combination of main release, service pack, special drivers, software, configuration, etc.  Later versions of Windows err on the side of tightening security by default. This means that the Windows host may need to have configuration changes made. These configuration changes mostly need to be made by the user.  setup-local-dcom will assist you in making required changes to the Windows Registry.  See the reference blogs for details.

The validate-dcom Command

validate-dcom is a crucial command.  It should be run before any other commands; if it does not run successfully then there is no point in running other commands. The validate-dcom command must be used from a DAS machine to test a different Windows machine.  If validate-dcom runs successfully you can be confident that all the DCOM commands will work.  Conversely, the opposite is also true: if validate-dcom fails, then no DCOM commands will work.

What validate-dcom does
- Verifies that the remote host is not the local machine.
- Resolves the remote host name.
- Checks that the remote DCOM port is being listened on (135, 139).
- Checks that the remote host’s File Sharing is enabled (port 445).
- Copies a file (a script) to the remote host to verify that SAMBA is working and authorization is correct.
- Runs the script that it copied on-the-fly to the remote host.

Tips and Tricks

The bread and butter commands that use DCOM are existing commands like create-instance, start-instance, etc.  All of the commands that have dcom in their name are for dealing with the actual nodes. The way the software works is to call asadmin.bat on the remote machine and run a command.  This means that you can track these commands easily on the remote machine with the usual tools, e.g. using AS_LOGFILE, looking at log files, etc.  It’s easy to attach a debugger to the remote asadmin process, “just in time”, if necessary.
How to debug the remote commands:

Edit the asadmin.bat file that is in the glassfish/bin folder (use glassfish/lib/nadmin.bat in GlassFish 4.0+) and add these options to the java call:

-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1234

Now if you run, say, start-instance on the DAS, you can attach your debugger, at your leisure, to the remote machine's port 1234.  It will be running start-local-instance and patiently waiting for you to attach.

    Read the article
