Search Results

Search found 11994 results on 480 pages for 'mixed mode'.


  • Extended service set help [closed]

    - by Cygnus X
    Sorry, this is going to be long and rambly. At home my network is set up like this: I have a "master router" that handles DHCP, from there a basic 5-port switch, and off that a second router that I put in access point mode so I could have better wireless coverage in the house. I can't seem to access it through a web browser, since the master router's DHCP assigned it an IP address. I told you that story so I could tell you this one. At the company I work for, they are trying to set up wireless cameras. According to the manual, to connect the cameras to our network we need to get into the wireless router and associate it with the cameras. The problem is that the company network is set up like my home network: there's a "master router" (a Windows 2003 server) and then a wireless router in AP mode dishing out wireless access. I can't get into the router. (By the way, I'm not the system admin for the company, I don't know much about Windows Server, and I can't easily get in contact with the system admin. Quite the cluster f*k, eh?) I've tried using Wireshark to find the router's IP, dug through ARP tables, and tried plugging straight into the router, but I can't seem to get into the damn thing. Any ideas/suggestions would be greatly appreciated.
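
    A quick way to hunt for the AP's address is to sweep the local subnet from a wired machine; this is a sketch, assuming a Linux box with nmap or arp-scan available (adjust the range to match the DHCP scope handed out by the master router):

      # ping-sweep the subnet for live hosts
      sudo nmap -sn 192.168.1.0/24

      # or list IP/MAC pairs directly; the AP's MAC vendor prefix usually gives it away
      sudo arp-scan --localnet

    Matching a MAC vendor prefix against the label on the access point is usually the fastest way to pick it out of the list.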

    Read the article

  • How to boot or "enter" into Ubuntu 11.04 (Natty Narwhal) on PS3 after installing it?

    - by Xdm
    After failing to install the Ubuntu 11.04 desktop version (burned to a DVD, because the ISO was 726 MB), I tried the alternate version (about 686 MB), which I burned to a CD. During the installation I manually partitioned the hard drive using ext3. After the installation, the CD ejected itself and I hit "continue". At the kboot prompt I hit Enter on the keyboard. I saw a welcome message on a dark background (a kind of full-screen terminal/console, a CLI I think) and was asked to enter my user name and password. After that, instead of booting into the beautiful Ubuntu desktop, I saw this: ubuntu@my_name$. That is a command line, I think, where you can use "sudo" commands. It is called non-graphical mode, I believe. My REAL problem is getting into Ubuntu 11.04 with the chocolate-colored background and the African drum music. I tried many commands to boot into the graphical mode of the system, such as Ctrl+Alt+F7, but nothing happened; I just saw numbers showing available blocks. I DON'T UNDERSTAND. I also tried startx and sudo startx without result. I DID buy a PS3 BOTH for gaming and for Ubuntu, because I can't afford a PC. Please help me with my trouble, dear Ubuntu 11.04 users. I am counting on you. All the best, Xdm
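
    If the alternate CD ended up performing a command-line-only installation, the graphical desktop may simply not be installed; a sketch of adding it from that very prompt, assuming a working network connection (note the PS3's GPU was never fully supported under Linux, so desktop performance may still be limited):

      sudo apt-get update
      sudo apt-get install ubuntu-desktop    # pulls in X, GDM and the Ubuntu desktop
      sudo service gdm start                 # or simply reboot afterwards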

    Read the article

  • How do I solve "The system is running in low-graphics mode" in the Ubuntu installer?

    - by hellodear
    I made a bootable USB for installing Ubuntu 12.04 LTS alongside Windows 8.1. I inserted the USB device and booted into it. It showed me two options, 'Try Ubuntu' or 'Install Ubuntu'. When I press 'Try Ubuntu', it says "The system is running in low-graphics mode". I press 'OK', it shows me 4 options, I click 'OK' again, and then it shows a black screen and nothing happens. I have tried all the answers provided on Ask Ubuntu. What should I do? Please help. PS: I am using Windows 8.1 with a dedicated graphics card, an AMD Radeon HD 8670M, on a Dell Inspiron 3537 laptop. UPDATE: I tried running the live USB session with nomodeset on and was able to enter the installer. But when I run boot-repair (so that Ubuntu gets detected in the GRUB menu) after installing Ubuntu successfully alongside Windows 8 (following this tutorial, with nomodeset on), I get the following error: "Your system is running in legacy mode. Boot repair done."
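
    Since nomodeset got the live session and installer working, the installed system will likely need it too, at least until a proper AMD driver is in place; this is the standard way to make it permanent through GRUB (not specific to this laptop):

      sudo nano /etc/default/grub
      # change:  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
      # to:      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
      sudo update-grub    # regenerates /boot/grub/grub.cfg

    Installing the proprietary AMD (fglrx) driver afterwards often makes the nomodeset workaround unnecessary again.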

    Read the article

  • Kernel 3.2.0-31-generic doesn't boot

    - by user92526
    12.04 used to work just fine a month ago. Since August I have installed all of the available updates, and since then my Mac Pro is acting weird. If I restart the computer, the screen gets stuck on the purple background without any text. If I restart again, I can choose which version of Linux I want to run. If I select 3.2.0-31-generic, the machine gets stuck at a blinking cursor. If I boot into 3.2.0-31-generic recovery mode instead, it gets stuck at a "..... memory freed End of Stack" message. I can't boot through this kernel at all. I have to choose an older version such as 3.2.0-12-generic to boot my Mac Pro, and even then I have to boot twice to get into it. I was wondering:
    -- if you could provide me a solution to boot the 3.2.0-31-generic kernel;
    -- if you could provide me a solution to boot into any version on the first restart (usually I have to kill the first restart);
    -- if there is an option in Linux to set which version runs every time I restart, so that I don't have to choose each time.
    Thank you
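
    For the last point, GRUB can be told which menu entry to boot by default; a sketch of the standard approach on 12.04 (the title must match your menu exactly, and entries nested under a "Previous Linux versions" submenu need the submenu title plus ">" as a prefix):

      # list the entry titles GRUB knows about
      grep menuentry /boot/grub/grub.cfg

      sudo nano /etc/default/grub
      # set, for example:
      #   GRUB_DEFAULT="Ubuntu, with Linux 3.2.0-12-generic"
      sudo update-grub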

    Read the article

  • After restoring my PC from a tarball (tar xvfpz backup.tgz -C /), my sound card and network are not working. How can I get them detected?

    - by axton hunger
    1. I have an old laptop (an Acer) that I installed Ubuntu 12.04 on.
    2. I booted into single-user mode and backed it up via:

      cd /
      sudo -i
      tar cvpzf backup.tgz --exclude=/proc --exclude=/dev --exclude=/lost+found --exclude=/backup.tgz --exclude=/mnt --exclude=/sys /

    3. I installed a fresh copy of Ubuntu 12.04 on my new laptop (a Dell).
    4. I booted into single-user mode.
    5. I backed up the existing /boot directory.
    6. I untarred my backup onto the Dell: sudo tar xvfpz backup.tgz -C /
    7. I restored the previous /boot directory again.
    8. It boots up, and my profile and settings are loaded OK, but Ubuntu shows that there is no sound card: I cannot use Unity to drag and change the volume. I noticed that the network card also doesn't work.

    How do you make Ubuntu recognize changed hardware if the installation is already configured for a different laptop? Does anyone know?
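
    On 12.04 the usual culprit for a dead network card after moving an image between machines is the persistent udev rules, which pin interface names to the old laptop's MAC addresses; a sketch of clearing them and nudging redetection (a common first step, though the missing sound card may need its own driver investigation):

      # drop the rules that bind eth0/wlan0 to the Acer's MAC addresses
      sudo rm /etc/udev/rules.d/70-persistent-net.rules

      # rebuild the initramfs so it matches the Dell's hardware
      sudo update-initramfs -u

      # reinitialize the ALSA configuration for the new audio chip
      sudo alsactl init

      sudo reboot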

    Read the article

  • How to configure multiple WCF binding configurations for the same scheme

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet.

    WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser

    The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here, though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration:

      <bindings>
        <netTcpBinding>
          <binding>
            <security mode="TransportWithMessageCredential">
              <message clientCredentialType="UserName"/>
            </security>
          </binding>
          <binding name="public">
            <security mode="Transport">
              <message clientCredentialType="Windows"/>
            </security>
          </binding>
        </netTcpBinding>
      </bindings>
      <client>
        <endpoint contract="Server.IService1" binding="netTcpBinding" address="net.tcp://localhost:8081/Service1.svc"/>
        <endpoint contract="Server.IService2" binding="netTcpBinding" address="net.tcp://localhost:8081/Service2.svc"/>
      </client>

    The server configuration is this:

      <bindings>
        <netTcpBinding>
          <binding portSharingEnabled="true">
            <security mode="TransportWithMessageCredential">
              <message clientCredentialType="UserName"/>
            </security>
          </binding>
          <binding name="public">
            <security mode="Transport">
              <message clientCredentialType="Windows"/>
            </security>
          </binding>
        </netTcpBinding>
      </bindings>
      <services>
        <service name="Service1">
          <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/>
        </service>
        <service name="Service2">
          <endpoint contract="Server.IService2, Library" binding="netTcpBinding" address=""/>
        </service>
      </services>
      <serviceHostingEnvironment>
        <serviceActivations>
          <add relativeAddress="Service1.svc" service="Server.Service1"/>
          <add relativeAddress="Service2.svc" service="Server.Service2"/>
        </serviceActivations>
      </serviceHostingEnvironment>

    The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all is fine, but together they produce the following exception on the client:

      The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server).

    In the server trace log, I find the following exception:

      Protocol Type application/negotiate was sent to a service that does not support that type of upgrade.

    Am I looking in the right direction, or is there a better way to solve this?
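
    Neither endpoint above references the named "public" binding, so both client and service endpoints fall back to the anonymous default netTcpBinding configuration and the Transport-mode binding is never used. A sketch of what is likely missing, wiring the named configuration in with bindingConfiguration (both sides must reference it):

      <!-- client side -->
      <endpoint contract="Server.IService2" binding="netTcpBinding"
                bindingConfiguration="public"
                address="net.tcp://localhost:8081/Service2.svc"/>

      <!-- server side -->
      <service name="Service2">
        <endpoint contract="Server.IService2, Library" binding="netTcpBinding"
                  bindingConfiguration="public" address=""/>
      </service>

    Note that the follow-up question below reports the same upgrade error even with bindingConfiguration in place; since both endpoints share port 8081, their transport-level security settings must be compatible, and hosting the differently-secured endpoint on its own port is a commonly suggested workaround.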

    Read the article

  • git | error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied [SOLVED]

    - by Corbin Tarrant
    I am having a strange issue that I can't seem to resolve. Here is what happened: I had some log files in a GitHub repository that I didn't want there. I found this script that removes files completely from git history:

      #!/bin/bash
      set -o errexit
      # Author: David Underhill
      # Script to permanently delete files/folders from your git repository. To use
      # it, cd to your repository's root and then run the script with a list of paths
      # you want to delete, e.g., git-delete-history path1 path2
      if [ $# -eq 0 ]; then
          exit 0
      fi
      # make sure we're at the root of git repo
      if [ ! -d .git ]; then
          echo "Error: must run this script from the root of a git repository"
          exit 1
      fi
      # remove all paths passed as arguments from the history of the repo
      files=$@
      git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch $files" HEAD
      # remove the temporary history git-filter-branch otherwise leaves behind for a long time
      rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune

    I, of course, made a backup first and then tried it. It seemed to work fine. I then did a git push -f and was greeted with the following messages:

      error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied
      error: Cannot update the ref 'refs/remotes/origin/master'.

    Everything seems to have pushed fine, though, because the files seem to be gone from the GitHub repository. If I try to push again I get the same thing:

      error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied
      error: Cannot update the ref 'refs/remotes/origin/master'.
      Everything up-to-date

    EDIT

      $ sudo chgrp {user} .git/logs/refs/remotes/origin/master
      $ sudo chown {user} .git/logs/refs/remotes/origin/master
      $ git push
      Everything up-to-date

    Thanks!

    EDIT

    Uh oh. Problem. I've been working on this project all night and just went to commit my changes:

      error: Unable to append to .git/logs/refs/heads/master: Permission denied
      fatal: cannot update HEAD ref

    So I:

      sudo chown {user} .git/logs/refs/heads/master
      sudo chgrp {user} .git/logs/refs/heads/master

    I try the commit again and I get:

      error: Unable to append to .git/logs/HEAD: Permission denied
      fatal: cannot update HEAD ref

    So I:

      sudo chown {user} .git/logs/HEAD
      sudo chgrp {user} .git/logs/HEAD

    And then I try the commit again:

      16 files changed, 499 insertions(+), 284 deletions(-)
      create mode 100644 logs/DBerrors.xsl
      delete mode 100644 logs/emptyPHPerrors.php
      create mode 100644 logs/trimXMLerrors.php
      rewrite public/codeCore/Classes/php/DatabaseConnection.php (77%)
      create mode 100644 public/codeSite/php/init.php

      $ git push
      Counting objects: 49, done.
      Delta compression using up to 2 threads.
      Compressing objects: 100% (27/27), done.
      Writing objects: 100% (27/27), 7.72 KiB, done.
      Total 27 (delta 15), reused 0 (delta 0)
      To git@github.com:IAmCorbin/MooKit.git
         59da24e..68b6397  master -> master

    Hooray. I jump on http://GitHub.com and check out the repository, and my latest commit is nowhere to be found. ::scratch head:: So I push again:

      Everything up-to-date

    Umm... it doesn't look like it. I've never had this issue before. Could this be a problem with GitHub, or did I mess something up with my git project?

    EDIT

    Never mind, I did a simple:

      git push origin master

    and it pushed fine.
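
    Chasing these Permission denied errors file by file suggests the filter-branch or gc step was run with sudo at some point, leaving scattered root-owned files under .git; a recursive ownership reset (a sketch, with {user} standing in for your login name as in the post above) deals with all of them at once:

      # from the repository root: hand the whole .git directory back to your user
      sudo chown -R {user}:{user} .git

      # verify nothing under .git is still owned by someone else
      find .git ! -user {user}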

    Read the article

  • Configuring multiple WCF binding configurations for the same scheme doesn't work

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet.

    WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser

    The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here, though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration:

      <bindings>
        <netTcpBinding>
          <binding>
            <security mode="TransportWithMessageCredential">
              <message clientCredentialType="UserName"/>
            </security>
          </binding>
          <binding name="public">
            <security mode="Transport">
              <message clientCredentialType="Windows"/>
            </security>
          </binding>
        </netTcpBinding>
      </bindings>
      <client>
        <endpoint contract="Server.IService1" binding="netTcpBinding" address="net.tcp://localhost:8081/Service1.svc"/>
        <endpoint contract="Server.IService2" binding="netTcpBinding" bindingConfiguration="public" address="net.tcp://localhost:8081/Service2.svc"/>
      </client>

    The server configuration is this:

      <bindings>
        <netTcpBinding>
          <binding portSharingEnabled="true">
            <security mode="TransportWithMessageCredential">
              <message clientCredentialType="UserName"/>
            </security>
          </binding>
          <binding name="public">
            <security mode="Transport">
              <message clientCredentialType="Windows"/>
            </security>
          </binding>
        </netTcpBinding>
      </bindings>
      <services>
        <service name="Service1">
          <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/>
        </service>
        <service name="Service2">
          <endpoint contract="Server.IService2, Library" binding="netTcpBinding" bindingConfiguration="public" address=""/>
        </service>
      </services>
      <serviceHostingEnvironment>
        <serviceActivations>
          <add relativeAddress="Service1.svc" service="Server.Service1"/>
          <add relativeAddress="Service2.svc" service="Server.Service2"/>
        </serviceActivations>
      </serviceHostingEnvironment>

    The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all is fine, but together they produce the following exception on the client:

      The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server).

    In the server trace log, I find the following exception:

      Protocol Type application/negotiate was sent to a service that does not support that type of upgrade.

    Am I looking in the right direction, or is there a better way to solve this?

    Read the article

  • Mac 10.6 Universal Binary scipy: cephes/specfun "_aswfa_" symbol not found

    - by Markus
    Hi folks, I can't get scipy to function in 32-bit mode when compiled as an i386/x86_64 universal binary and executed on my 64-bit 10.6.2 MacPro1,1.

    My python setup

    With the help of this answer, I built a 32/64-bit Intel universal binary of Python 2.6.4 with the intention of using the arch command to select between the architectures. (I managed to make universal binaries of a few libraries I wanted using lipo.) That all works. I then installed scipy according to the instructions in hyperjeff's article, only with a more up-to-date numpy (1.4.0) and skipping the bit about moving numpy aside briefly during the installation of scipy. Now, everything except scipy seems to be working as far as I can tell, and I can indeed select between 32- and 64-bit mode using arch -i386 python and arch -x86_64 python.

    The error

    Scipy complains in 32-bit mode:

      $ arch -x86_64 python -c "import scipy.interpolate; print 'success'"
      success
      $ arch -i386 python -c "import scipy.interpolate; print 'success'"
      Traceback (most recent call last):
        File "<string>", line 1, in <module>
        File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/__init__.py", line 7, in <module>
          from interpolate import *
        File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/interpolate.py", line 13, in <module>
          import scipy.special as spec
        File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/__init__.py", line 8, in <module>
          from basic import *
        File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/basic.py", line 8, in <module>
          from _cephes import *
      ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so, 2): Symbol not found: _aswfa_
        Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so
        Expected in: flat namespace in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so

    Attempt at tracking down the problem

    It looks like scipy.interpolate imports something called _cephes, which looks for a symbol called _aswfa_ but can't find it in 32-bit mode. Browsing through scipy's source, I find an ASWFA subroutine in specfun.f. The only scipy product file with a similar name is specfun.so, but both that and _cephes.so appear to be universal binaries:

      $ cd /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/
      $ file _cephes.so specfun.so
      _cephes.so: Mach-O universal binary with 2 architectures
      _cephes.so (for architecture i386): Mach-O bundle i386
      _cephes.so (for architecture x86_64): Mach-O 64-bit bundle x86_64
      specfun.so: Mach-O universal binary with 2 architectures
      specfun.so (for architecture i386): Mach-O bundle i386
      specfun.so (for architecture x86_64): Mach-O 64-bit bundle x86_64

    Ho hum. I'm stuck. Things I may try but haven't figured out how yet include compiling specfun.so myself manually, somehow. I would imagine that scipy isn't broken for all 32-bit machines, so I guess something is wrong with the way I've installed it, but I can't figure out what. I don't really expect a full answer given my fairly unique (?) setup, but if anyone has any clues that might point me in the right direction, they'd be greatly appreciated.

    (edit) More details to address questions:

    I'm using gfortran (GNU Fortran from GCC 4.2.1, Apple Inc. build 5646). Python 2.6.4 was installed more-or-less like so:

      cd /tmp
      curl -O http://www.python.org/ftp/python/2.6.4/Python-2.6.4.tar.bz2
      tar xf Python-2.6.4.tar.bz2
      cd Python-2.6.4
      # Now replace buggy pythonw.c file with one that supports the "arch" command:
      curl http://bugs.python.org/file14949/pythonw.c | sed s/2.7/2.6/ > Mac/Tools/pythonw.c
      ./configure --enable-framework=/Library/Frameworks --enable-universalsdk=/ --with-universal-archs=intel
      make -j4
      sudo make frameworkinstall

    Scipy 0.7.1 was installed pretty much as described here, but it boils down to a simple sudo python setup.py install.

    It would indeed appear that the symbol is undefined in the i386 architecture if you look at the _cephes library with nm, as suggested by David Cournapeau:

      $ nm -arch x86_64 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_
      00000000000d4950 T _aswfa_
      000000000011e4b0 d _oblate_aswfa_data
      000000000011e510 d _oblate_aswfa_nocv_data
      (snip)
      $ nm -arch i386 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_
               U _aswfa_
      0002e96c d _oblate_aswfa_data
      0002e99c d _oblate_aswfa_nocv_data
      (snip)

    however, I can't yet explain its absence.
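
    One hedged guess: the scipy build takes its architectures from compiler flags, and if the Fortran objects were compiled only for x86_64, the i386 half of _cephes.so is left with exactly this kind of unresolved external. A sketch of forcing a clean dual-architecture rebuild, assuming a gfortran that accepts Apple-style -arch flags (the build distributed for R does; plain FSF gfortran does not):

      export CFLAGS="-arch i386 -arch x86_64"
      export FFLAGS="-arch i386 -arch x86_64"
      export LDFLAGS="-Wl,-search_paths_first -arch i386 -arch x86_64"

      cd scipy-0.7.1
      rm -rf build                       # make sure no single-arch objects are reused
      sudo -E python setup.py install    # -E keeps the exported flags under sudo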

    Read the article

  • Ivy resolve not working with dynamic artifact

    - by richever
    I've been using Ivy a bit, but I seem to still have a lot to learn. I have two projects: one is a web app, and the other is a library upon which the web app depends. The library project is compiled to a jar file and published with Ivy to a directory within the project. In the web app build file, I have an ant target that calls the Ivy resolve ant task. What I'd like to do is have the web app use the dynamic resolve mode during development (on developers' local machines) and the default resolve mode for test and production builds. Previously I appended a time stamp to the library archive file so that Ivy would notice changes when the web app tried to resolve its dependency on it. Within Eclipse this is cumbersome because, in the web app, the project had to be refreshed and the build path tweaked every time a new library jar was published. Publishing a similarly named jar file every time would, I figure, only require developers to refresh the project. The problem is that the web app is unable to retrieve the dynamic jar file. The output I get looks something like this:

      resolve:
      [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
      [ivy:configure] :: loading settings :: file = /Users/richard/workspace/webapp/web/WEB-INF/config/ivy/ivysettings.xml
      [ivy:resolve] :: resolving dependencies :: com.webapp#webapp;[email protected]
      [ivy:resolve] confs: [default]
      [ivy:resolve] found com.webapp#library;latest.integration in local
      [ivy:resolve] :: resolution report :: resolve 142ms :: artifacts dl 0ms
      ---------------------------------------------------------------------
      |            |          modules            ||       artifacts       |
      |    conf    | number| search|dwnlded|evicted|| number|dwnlded|
      ---------------------------------------------------------------------
      |  default   |   1   |   0   |   0   |   0   ||   0   |   0   |
      ---------------------------------------------------------------------
      [ivy:resolve]
      [ivy:resolve] :: problems summary ::
      [ivy:resolve] :::: WARNINGS
      [ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
      [ivy:resolve] :: UNRESOLVED DEPENDENCIES ::
      [ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
      [ivy:resolve] :: com.webapp#library;latest.integration: impossible to resolve dynamic revision
      [ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
      [ivy:resolve] :::: ERRORS
      [ivy:resolve] impossible to resolve dynamic revision for com.webapp#library;latest.integration: check your configuration and make sure revision is part of your pattern
      [ivy:resolve]
      [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

      BUILD FAILED
      /Users/richard/workspace/webapp/build.xml:71: impossible to resolve dependencies: resolve failed - see output for details

    The web app resolve target looks like this:

      <target name="resolve" depends="load-ivy">
        <ivy:configure file="${ivy.dir}/ivysettings.xml" />
        <ivy:resolve file="${ivy.dir}/ivy.xml" resolveMode="${ivy.resolve.mode}"/>
        <ivy:retrieve pattern="${lib.dir}/[artifact]-[revision].[ext]" type="jar" sync="true" />
      </target>

    In this case, ivy.resolve.mode has a value of 'dynamic' (without quotes). The web app's Ivy file is simple. It looks like this:

      <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd">
        <info organisation="com.webapp" module="webapp"/>
        <dependencies>
          <dependency name="library" rev="${ivy.revision.default}" revConstraint="${ivy.revision.dynamic}" />
        </dependencies>
      </ivy-module>

    During development, ivy.revision.dynamic has a value of 'latest.integration', while in production or test, ivy.revision.default has a value of '1.0'. Any ideas? Please let me know if there's any more information I need to supply. Thanks!
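
    The error text ("make sure revision is part of your pattern") points at the resolver patterns in ivysettings.xml: latest.integration can only be matched if the ivy and artifact patterns contain a [revision] token and each publish uses a distinct revision. A sketch of a local filesystem resolver that satisfies this (the names and paths here are illustrative, not from the original post):

      <ivysettings>
        <settings defaultResolver="local"/>
        <resolvers>
          <filesystem name="local">
            <!-- [revision] must appear here for dynamic revisions to resolve -->
            <ivy pattern="${ivy.repo.dir}/[organisation]/[module]/ivy-[revision].xml"/>
            <artifact pattern="${ivy.repo.dir}/[organisation]/[module]/[artifact]-[revision].[ext]"/>
          </filesystem>
        </resolvers>
      </ivysettings>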

    Read the article

  • Dynamic Scoped Resources in WPF/XAML?

    - by firoso
    I have two XAML files: one containing a DataTemplate which has a resource definition for an ImageBrush, and the other containing a ContentControl which presents this DataTemplate. The DataTemplate is bound to a view model class. Everything seems to work EXCEPT the ImageBrush resource, which just shows up white... Any ideas?

    File 1: DataTemplate for the ViewModel

      <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
          xmlns:vm="clr-namespace:SEL.MfgTestDev.ESS.ViewModel"
          xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
          xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
          mc:Ignorable="d">
        <DataTemplate DataType="{x:Type vm:PresenterViewModel}">
          <DataTemplate.Resources>
            <ImageBrush x:Key="PresenterTitleBarFillBrush" TileMode="Tile"
                Viewbox="{Binding Path=FillBrushDimensions, Mode=Default}" ViewboxUnits="Absolute"
                Viewport="{Binding Path=FillBrushPatternSize, Mode=Default}" ViewportUnits="Absolute"
                ImageSource="{Binding Path=FillImage, Mode=Default}"/>
          </DataTemplate.Resources>
          <Grid d:DesignWidth="1440" d:DesignHeight="900">
            <Grid.ColumnDefinitions>
              <ColumnDefinition Width="*"/>
              <ColumnDefinition Width="192"/>
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
              <RowDefinition Height="120"/>
              <RowDefinition Height="*"/>
            </Grid.RowDefinitions>
            <DockPanel HorizontalAlignment="Stretch" Width="Auto" LastChildFill="True" Background="{x:Null}" Grid.ColumnSpan="2">
              <Image Source="{Binding Path=ImageSource, Mode=Default}"/>
              <Rectangle Fill="{DynamicResource PresenterTitleBarFillBrush}"/>
            </DockPanel>
          </Grid>
        </DataTemplate>
      </ResourceDictionary>

    File 2: Main window, which instantiates the DataTemplate via its view model

      <Window x:Class="SEL.MfgTestDev.ESS.ESSMainWindow"
          xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
          xmlns:vm="clr-namespace:SEL.MfgTestDev.ESS.ViewModel"
          Title="ESS Control Window" Height="900" Width="1440"
          WindowState="Maximized" WindowStyle="None" ResizeMode="NoResize"
          DataContext="{Binding}">
        <Window.Resources>
          <ResourceDictionary Source="PresenterViewModel.xaml" />
        </Window.Resources>
        <ContentControl>
          <ContentControl.Content>
            <vm:PresenterViewModel ImageSource="XAMLResources\SEL25YearsTitleBar.bmp"
                FillImage="XAMLResources\SEL25YearsFillPattern.bmp"
                FillBrushDimensions="0,0,5,110"
                FillBrushPatternSize="0,0,5,120"/>
          </ContentControl.Content>
        </ContentControl>
      </Window>

    And for the sake of completeness, the code-behind for the view model:

      using System;
      using System.Collections.Generic;
      using System.Text;
      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Data;
      using System.Windows.Documents;
      using System.Windows.Input;
      using System.Windows.Media;
      using System.Windows.Media.Imaging;
      using System.Windows.Shapes;

      namespace SEL.MfgTestDev.ESS.ViewModel
      {
          public class PresenterViewModel : ViewModelBase
          {
              public PresenterViewModel()
              {
              }

              // Data bindings
              private ImageSource _imageSource;
              public ImageSource ImageSource
              {
                  get { return _imageSource; }
                  set
                  {
                      if (_imageSource != value)
                      {
                          _imageSource = value;
                          OnPropertyChanged("ImageSource");
                      }
                  }
              }

              private Rect _fillBrushPatternSize;
              public Rect FillBrushPatternSize
              {
                  get { return _fillBrushPatternSize; }
                  set
                  {
                      if (_fillBrushPatternSize != value)
                      {
                          _fillBrushPatternSize = value;
                          OnPropertyChanged("FillBrushPatternSize");
                      }
                  }
              }

              private Rect _fillBrushDimensions;
              public Rect FillBrushDimensions
              {
                  get { return _fillBrushDimensions; }
                  set
                  {
                      if (_fillBrushDimensions != value)
                      {
                          _fillBrushDimensions = value;
                          OnPropertyChanged("FillBrushDimensions");
                      }
                  }
              }

              private ImageSource _fillImage;
              public ImageSource FillImage
              {
                  get { return _fillImage; }
                  set
                  {
                      if (_fillImage != value)
                      {
                          _fillImage = value;
                          OnPropertyChanged("FillImage");
                      }
                  }
              }
          }
      }
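
    One frequently cited explanation for white brushes like this: an ImageBrush declared in DataTemplate.Resources is not part of the visual tree, so depending on how it is shared and frozen its Bindings may never see a DataContext. A hedged sketch of the usual workaround, declaring the brush inline so it inherits the Rectangle's DataContext (setting x:Shared="False" on the resource is another option):

      <Rectangle>
        <Rectangle.Fill>
          <ImageBrush TileMode="Tile"
              Viewbox="{Binding FillBrushDimensions}" ViewboxUnits="Absolute"
              Viewport="{Binding FillBrushPatternSize}" ViewportUnits="Absolute"
              ImageSource="{Binding FillImage}"/>
        </Rectangle.Fill>
      </Rectangle>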

    Read the article

  • Why Do I See the "In Recovery" Message, and How Can I Prevent It?

    - by John Hansen
    The project I'm working on creates a local copy of the SQL Server database for each SVN branch you work on. We're running SQL Server 2008 Express with Advanced Services on our local machines to host it. When we create a new branch, the build script creates a new database with the ID of that branch, creates the schema objects, and copies over a selection of data from the production shadow server. After the database is created, it, or other databases on the local machine, will often go into "In Recovery" mode for several minutes. After several refreshes it comes up and is happy, but will occasionally go back into "In Recovery" mode. The database is created in simple recovery mode. The file names aren't specified, so it uses the default paths for files. The size of the database after loading data is ~400 MB. It is running in SQL Server 2005 compatibility mode. The command that creates the database is:

      sqlcmd -S $(DBServer) -Q "IF NOT EXISTS (SELECT [name] FROM sysdatabases WHERE [name] = '$(DBName)') BEGIN CREATE DATABASE [$(DBName)]; print 'Created $(DBName)'; END"

    ...where $(DBName) and $(DBServer) are MSBuild parameters. I got a nice clean log file this morning. When I turned on my computer it started all five databases; however, two of them show transactions being rolled forward and backward. Then it just keeps trying to start up all five of the databases:

      2010-06-10 08:24:59.74 spid52 Starting up database 'ASPState'.
      2010-06-10 08:24:59.82 spid52 Starting up database 'CommunityLibrary'.
      2010-06-10 08:25:03.97 spid52 Starting up database 'DLG-R8441'.
      2010-06-10 08:25:05.07 spid52 2 transactions rolled forward in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
      2010-06-10 08:25:05.14 spid52 0 transactions rolled back in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
      2010-06-10 08:25:05.14 spid52 Recovery is writing a checkpoint in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
      2010-06-10 08:25:11.23 spid52 Starting up database 'DLG-R8979'.
      2010-06-10 08:25:12.31 spid36s Starting up database 'DLG-R8441'.
      2010-06-10 08:25:13.17 spid52 2 transactions rolled forward in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
      2010-06-10 08:25:13.22 spid52 0 transactions rolled back in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
      2010-06-10 08:25:13.22 spid52 Recovery is writing a checkpoint in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
      2010-06-10 08:25:18.43 spid52 Starting up database 'Rls QA'.
      2010-06-10 08:25:19.13 spid46s Starting up database 'DLG-R8979'.
      2010-06-10 08:25:23.29 spid36s Starting up database 'DLG-R8441'.
      2010-06-10 08:25:27.91 spid52 Starting up database 'ASPState'.
      2010-06-10 08:25:29.80 spid41s Starting up database 'DLG-R8979'.
      2010-06-10 08:25:31.22 spid52 Starting up database 'Rls QA'.

    In this case it kept trying to start the databases continuously until I shut down SQL Server at 08:48:19.72, 23 minutes later. Meanwhile, I was actually able to use the databases much of the time.
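
    The repeated "Starting up database" lines are the signature of AUTO_CLOSE, which is on by default for databases created on SQL Server Express: every time the last connection closes, the database shuts down, and each reopen can run recovery. Assuming that is the cause here, a sketch of turning it off for an existing branch database and for newly created ones:

      ALTER DATABASE [DLG-R8441] SET AUTO_CLOSE OFF;

      -- or bake it into the build script right after the CREATE DATABASE:
      sqlcmd -S $(DBServer) -Q "ALTER DATABASE [$(DBName)] SET AUTO_CLOSE OFF;"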

    Read the article

  • Snort's problems generating alerts from the DARPA 1998 intrusion detection dataset

    - by manofseven2
    Hi. I'm working on the DARPA 1998 intrusion detection dataset. When I run Snort on this dataset (the outside.tcpdump file), Snort doesn't generate the complete list of alerts: it starts from the last few hours of the tcpdump file and generates alerts for that section only, and all the packets in the first hours are ignored. Another problem is the time stamps of the generated alerts: when I run Snort on a specific day of the dataset, Snort inserts incorrect time stamps in the alerts. The configuration, command line, and other information about my setup follow.

    Snort version: 2.8.6
    Operating system: Windows XP
    Rule version: snortrules-snapshot-2860_s.tar.gz

    Command line:

      snort_2.8.6 -c D:\programs\Snort_2.8.6\snort\etc\snort.conf -r d:\users\amir\docs\darpa\training_data\week_3\monday\outside.tcpdump -l D:\users\amir\current-task\research\thesis\snort\890230

    snort.conf:

      # Setup the network addresses you are protecting
      var HOME_NET any
      # Set up the external network addresses. Leave as "any" in most situations
      var EXTERNAL_NET any
      # List of DNS servers on your network
      var DNS_SERVERS $HOME_NET
      # List of SMTP servers on your network
      var SMTP_SERVERS $HOME_NET
      # List of web servers on your network
      var HTTP_SERVERS $HOME_NET
      # List of sql servers on your network
      var SQL_SERVERS $HOME_NET
      # List of telnet servers on your network
      var TELNET_SERVERS $HOME_NET
      # List of ssh servers on your network
      var SSH_SERVERS $HOME_NET
      # List of ports you run web servers on
      portvar HTTP_PORTS [80,1220,2301,3128,7777,7779,8000,8008,8028,8080,8180,8888,9999]
      # List of ports you want to look for SHELLCODE on.
      portvar SHELLCODE_PORTS !80
      # List of ports you might see oracle attacks on
      portvar ORACLE_PORTS 1024:
      # List of ports you want to look for SSH connections on:
      portvar SSH_PORTS 22
      # other variables, these should not be modified
      var AIM_SERVERS [64.12.24.0/23,64.12.28.0/23,64.12.161.0/24,64.12.163.0/24,64.12.200.0/24,205.188.3.0/24,205.188.5.0/24,205.188.7.0/24,205.188.9.0/24,205.188.153.0/24,205.188.179.0/24,205.188.248.0/24]
      var RULE_PATH ../rules
      var SO_RULE_PATH ../so_rules
      var PREPROC_RULE_PATH ../preproc_rules
      # Stop generic decode events:
      config disable_decode_alerts
      # Stop Alerts on experimental TCP options
      config disable_tcpopt_experimental_alerts
      # Stop Alerts on obsolete TCP options
      config disable_tcpopt_obsolete_alerts
      # Stop Alerts on T/TCP alerts
      config disable_tcpopt_ttcp_alerts
      # Stop Alerts on all other TCPOption type events:
      config disable_tcpopt_alerts
      # Stop Alerts on invalid ip options
      config disable_ipopt_alerts
      # Alert if value in length field (IP, TCP, UDP) is greater than the length of the packet
      # config enable_decode_oversized_alerts
      # Same as above, but drop packet if in Inline mode (requires enable_decode_oversized_alerts)
      # config enable_decode_oversized_drops
      # Configure IP / TCP checksum mode
      config checksum_mode: all
      config pcre_match_limit: 1500
      config pcre_match_limit_recursion: 1500
      # Configure the detection engine. See the Snort Manual, Configuring Snort - Includes - Config
      config detection: search-method ac-split search-optimize max-pattern-len 20
      # Configure the event queue. For more information, see README.event_queue
      config event_queue: max_queue 8 log 3 order_events content_length
      dynamicpreprocessor directory D:\programs\Snort_2.8.6\snort\lib\snort_dynamicpreprocessor
      dynamicengine D:\programs\Snort_2.8.6\snort\lib\snort_dynamicengine\sf_engine.dll
      # path to dynamic rules libraries
      #dynamicdetection directory /usr/local/lib/snort_dynamicrules
      preprocessor frag3_global: max_frags 65536
      preprocessor frag3_engine: policy windows detect_anomalies overlap_limit 10 min_fragment_length 100 timeout 180
      preprocessor stream5_global: max_tcp 8192, track_tcp yes, track_udp yes, track_icmp no
      preprocessor stream5_tcp: policy windows, detect_anomalies, require_3whs 180, \
          overlap_limit 10, small_segments 3 bytes 150, timeout 180, \
          ports client 21 22 23 25 42 53 79 109 110 111 113 119 135 136 137 139 143 \
          161 445 513 514 587 593 691 1433 1521 2100 3306 6665 6666 6667 6668 6669 \
          7000 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779, \
          ports both 80 443 465 563 636 989 992 993 994 995 1220 2301 3128 6907 7702 7777 7779 7801 7900 7901 7902 7903 7904 7905 \
          7906 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 8000 8008 8028 8080 8180 8888 9999
      preprocessor stream5_udp: timeout 180
      preprocessor http_inspect: global iis_unicode_map unicode.map 1252 compress_depth 20480 decompress_depth 20480
      preprocessor http_inspect_server: server default \
          chunk_length 500000 \
          server_flow_depth 0 \
          client_flow_depth 0 \
          post_depth 65495 \
          oversize_dir_length 500 \
          max_header_length 750 \
          max_headers 100 \
          ports { 80 1220 2301 3128 7777 7779 8000 8008 8028 8080 8180 8888 9999 } \
          non_rfc_char { 0x00 0x01 0x02 0x03 0x04 0x05 0x06 0x07 } \
          enable_cookie \
          extended_response_inspection \
          inspect_gzip \
          apache_whitespace no \
          ascii no \
          bare_byte no \
          directory no \
          double_decode no \
          iis_backslash no \
          iis_delimiter no \
          iis_unicode no \
          multi_slash no \
          non_strict \
          u_encode yes \
          webroot no
      preprocessor rpc_decode: 111 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779 no_alert_multiple_requests no_alert_large_fragments no_alert_incomplete
      preprocessor bo
      preprocessor ftp_telnet: global inspection_type stateful encrypted_traffic no
      preprocessor ftp_telnet_protocol: telnet \
          ayt_attack_thresh 20 \
          normalize ports { 23 } \
          detect_anomalies
      preprocessor ftp_telnet_protocol: ftp server default \
          def_max_param_len 100 \
          ports { 21 2100 3535 } \
          telnet_cmds yes \
          ignore_telnet_erase_cmds yes \
          ftp_cmds { ABOR ACCT ADAT ALLO APPE AUTH CCC CDUP } \
          ftp_cmds { CEL CLNT CMD CONF CWD DELE ENC EPRT } \
          ftp_cmds { EPSV ESTA ESTP FEAT HELP LANG LIST LPRT } \
          ftp_cmds { LPSV MACB MAIL MDTM MIC MKD MLSD MLST } \
          ftp_cmds { MODE NLST NOOP OPTS PASS PASV PBSZ PORT } \
          ftp_cmds { PROT PWD QUIT REIN REST RETR RMD RNFR } \
          ftp_cmds { RNTO SDUP SITE SIZE SMNT STAT STOR STOU } \
          ftp_cmds { STRU SYST TEST TYPE USER XCUP XCRC XCWD } \
          ftp_cmds { XMAS XMD5 XMKD XPWD XRCP XRMD XRSQ XSEM } \
          ftp_cmds { XSEN XSHA1 XSHA256 } \
          alt_max_param_len 0 { ABOR CCC CDUP ESTA FEAT LPSV NOOP PASV PWD QUIT REIN STOU SYST XCUP XPWD } \
          alt_max_param_len 200 { ALLO APPE CMD HELP NLST RETR RNFR STOR STOU XMKD } \
          alt_max_param_len 256 { CWD RNTO } \
          alt_max_param_len 400 { PORT } \
          alt_max_param_len 512 { SIZE } \
          chk_str_fmt { ACCT ADAT ALLO APPE AUTH CEL CLNT CMD } \
          chk_str_fmt { CONF CWD DELE ENC EPRT EPSV ESTP HELP } \
          chk_str_fmt { LANG LIST LPRT MACB MAIL MDTM MIC MKD } \
          chk_str_fmt { MLSD MLST MODE NLST OPTS PASS PBSZ PORT } \
          chk_str_fmt { PROT REST RETR RMD RNFR RNTO SDUP SITE } \
          chk_str_fmt { SIZE SMNT STAT STOR STRU TEST TYPE USER } \
          chk_str_fmt { XCRC XCWD XMAS XMD5 XMKD XRCP XRMD XRSQ } \
          chk_str_fmt { XSEM XSEN XSHA1 XSHA256 } \
          cmd_validity ALLO \
          cmd_validity EPSV \
          cmd_validity MACB \
          cmd_validity MDTM \
          cmd_validity MODE \
          cmd_validity PORT \
          cmd_validity PROT \
          cmd_validity STRU \
          cmd_validity TYPE
      preprocessor ftp_telnet_protocol: ftp client default \
          max_resp_len 256 \
          bounce yes \
          ignore_telnet_erase_cmds yes \
          telnet_cmds yes
      preprocessor smtp: ports { 25 465 587 691 } \
          inspection_type stateful \
          normalize cmds \
          normalize_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
          max_command_line_len 512 \
          max_header_line_len 1000 \
          max_response_line_len 512 \
          alt_max_command_line_len 260 { MAIL } \
          alt_max_command_line_len 300 { RCPT } \
          alt_max_command_line_len 500 { HELP HELO ETRN EHLO } \
          alt_max_command_line_len 255 { EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET } \
          alt_max_command_line_len 246 { SEND SAML SOML AUTH TURN ETRN DATA RSET QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
          valid_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
          xlink2state { enabled }
      preprocessor ssh: server_ports { 22 } \
          autodetect \
          max_client_bytes 19600 \
          max_encrypted_packets 20 \
          max_server_version_len 100 \
          enable_respoverflow enable_ssh1crc32 \
          enable_srvoverflow enable_protomismatch
      preprocessor dcerpc2: memcap 102400, events [co ]
      preprocessor dcerpc2_server: default, policy WinXP, \
          detect [smb [139,445], tcp 135, udp 135, rpc-over-http-server 593], \
          autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \
          smb_max_chain 3
      preprocessor dns: ports { 53 } enable_rdata_overflow
      preprocessor ssl: ports { 443 465 563 636 989 992 993 994 995 7801 7702 7900 7901 7902 7903 7904 7905 7906 6907 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 }, trustservers, noinspect_encrypted
      # SDF sensitive data preprocessor. For more information see README.sensitive_data
      preprocessor sensitive_data: alert_threshold 25
      output alert_full: alert.log
      output database: log, mysql, user=root password=123456 dbname=snort host=localhost
      include classification.config
      include reference.config
      include $RULE_PATH/local.rules
      include $RULE_PATH/attack-responses.rules
      include $RULE_PATH/backdoor.rules
      include $RULE_PATH/bad-traffic.rules
      include $RULE_PATH/chat.rules
      include $RULE_PATH/content-replace.rules
      include $RULE_PATH/ddos.rules
      include $RULE_PATH/dns.rules
      include $RULE_PATH/dos.rules
      include $RULE_PATH/exploit.rules
      include $RULE_PATH/finger.rules
      include $RULE_PATH/ftp.rules
      include $RULE_PATH/icmp.rules
      include $RULE_PATH/icmp-info.rules
      include $RULE_PATH/imap.rules
      include $RULE_PATH/info.rules
      include $RULE_PATH/misc.rules
      include $RULE_PATH/multimedia.rules
      include $RULE_PATH/mysql.rules
      include $RULE_PATH/netbios.rules
      include $RULE_PATH/nntp.rules
      include $RULE_PATH/oracle.rules
      include $RULE_PATH/other-ids.rules
      include $RULE_PATH/p2p.rules
      include $RULE_PATH/policy.rules
      include $RULE_PATH/pop2.rules
      include $RULE_PATH/pop3.rules
      include $RULE_PATH/rpc.rules
      include $RULE_PATH/rservices.rules
      include $RULE_PATH/scada.rules
      include $RULE_PATH/scan.rules
      include $RULE_PATH/shellcode.rules
      include $RULE_PATH/smtp.rules
      include $RULE_PATH/snmp.rules
      include $RULE_PATH/specific-threats.rules
      include $RULE_PATH/spyware-put.rules
      include $RULE_PATH/sql.rules
      include $RULE_PATH/telnet.rules
      include $RULE_PATH/tftp.rules
      include $RULE_PATH/virus.rules
      include $RULE_PATH/voip.rules
      include $RULE_PATH/web-activex.rules
      include $RULE_PATH/web-attacks.rules
      include $RULE_PATH/web-cgi.rules
      include $RULE_PATH/web-client.rules
      include $RULE_PATH/web-coldfusion.rules
      include $RULE_PATH/web-frontpage.rules
      include $RULE_PATH/web-iis.rules
      include $RULE_PATH/web-misc.rules
      include $RULE_PATH/web-php.rules
      include $RULE_PATH/x11.rules
      include threshold.conf

    Can anyone help me solve this problem? Thanks.
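
    A hedged guess about the missing alerts: Snort validates checksums before rules run, and the config above sets checksum_mode: all; the synthesized DARPA traces are widely reported to contain packets that fail checksum validation, which would make Snort silently skip stretches of the capture. A sketch of relaxing the check (both forms exist in Snort 2.8.x):

      # in snort.conf
      config checksum_mode: none

      # or per run, on the command line
      snort_2.8.6 -k none -c snort.conf -r outside.tcpdump -l <log-dir>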

    Read the article

  • Our Look at the Internet Explorer 9 Platform Preview

    - by Asian Angel
    Have you been hearing all about Microsoft's work on Internet Explorer 9 and are curious about it? If you want a taste of the upcoming release, join us as we take a look at the Internet Explorer 9 Platform Preview. Note: Windows Vista and Server 2008 users may need to install a Platform Update (see the link at the bottom for more information).

    Getting Started

    If you are curious about the systems that the platform preview will operate on, here is an excerpt from the FAQ page (link provided below). There are two important points of interest here: the platform preview does not replace your regular Internet Explorer installation, and the platform preview (and the final version of Internet Explorer 9) will not work on Windows XP. There really is not a lot to the install process... basically all that you will have to deal with is the "EULA Window" and the "Install Finished Window". Note: The platform preview installs to a Program Files folder named "Internet Explorer Platform Preview".

    Internet Explorer 9 Platform Preview in Action

    When you start the platform preview for the first time, you will be presented with the Internet Explorer 9 Test Drive homepage. Do not be surprised that there is not a lot to the UI at this time... but you can get a good idea of how Internet Explorer 9 will act. Note: You will not be able to alter the homepage for the platform preview. Of the four menus available, there are two that will be of interest to most people: the "Page" and "Debug" menus. If you want to navigate to a new webpage, you will need to go through the "Page Menu" unless you have installed the Address Bar Mini-Tool (shown below). Want to see what a webpage will look like in an older version of Internet Explorer? Then choose your version in the "Debug Menu". We did find it humorous that IE6 was excluded from the choices offered. Here is what the URL entry window looks like if you are using the "Page Menu" to navigate between websites. Here is our main page displayed in "IE9 Mode"... looking good. And here it is in "Forced IE5 Document Mode"; there were some minor differences (colors, sidebar, etc.) in how the main page displayed in comparison to "IE9 Mode". Being able to switch between modes makes for an interesting experience. As you can see, there is not much to the "Context Menu" at the moment. Notice the slightly altered icon for the platform preview.

    "Add" an Address Bar of Sorts

    If you would like to use a makeshift address bar with the platform preview, you can set up the portable file (IE9browser.exe) for the Internet Explorer 9 Test Platform Addressbar Mini-Tool. Just place it in an appropriate folder, create a shortcut for it, and it will be ready to go. Here is a close look at the left side of the Address Bar Mini-Tool. You can try to access "IE Favorites" but may have sporadic results like those we experienced during our tests. Note: The Address Bar Mini-Tool will not line up perfectly with the platform preview but still makes a nice addition. And a close look at the right side of the Address Bar Mini-Tool. In order to completely shut down the Address Bar Mini-Tool, you will need to click on "Close". Each time you enter an address into the Address Bar Mini-Tool, it will open a new window/instance of the platform preview. Note: During our tests we noticed that clicking on "Home" in the "Page Menu" opened the previously viewed website, but once we closed and restarted the platform preview, the Test Drive website was the starting/home page again.
    Even if the platform preview is not running, the Address Bar Mini-Tool can still run, as shown here. Note: You will not be able to move the Address Bar Mini-Tool from its locked-in position at the top of the screen. Now for some fun: with just the Address Bar Mini-Tool open, you can enter an address and cause the platform preview to open. Here is our example from above, now open in the platform preview... good to go.

    Conclusion

    During our tests we did experience the occasional crash, but overall we were pleased with the platform preview's performance. It handled rather well and definitely seemed much quicker than Internet Explorer 8 on our test system (a definite bonus!). If you are an early adopter, this could certainly get you in the mood for the upcoming beta releases!

    Links

    Download the Internet Explorer 9 Platform Preview
    Download the Internet Explorer 9 Test Platform Addressbar Mini-Tool
    Information about Platform Update for Windows Vista & Server 2008
    View the Internet Explorer 9 Platform Preview FAQ

    Read the article

  • 8 Things You Can Do In Android’s Developer Options

    - by Chris Hoffman
    The Developer Options menu in Android is a hidden menu with a variety of advanced options. These options are intended for developers, but many of them will be interesting to geeks. You'll have to perform a secret handshake to enable the Developer Options menu in the Settings screen, as it's hidden from Android users by default. Follow the simple steps to quickly enable Developer Options.

    Enable USB Debugging

    "USB debugging" sounds like an option only an Android developer would need, but it's probably the most widely used hidden option in Android. USB debugging allows applications on your computer to interface with your Android phone over the USB connection. This is required for a variety of advanced tricks, including rooting an Android phone, unlocking it, installing a custom ROM, or even using a desktop program that captures screenshots of your Android device's screen. You can also use ADB commands to push and pull files between your device and your computer or create and restore complete local backups of your Android device without rooting. USB debugging can be a security concern, as it gives computers you plug your device into access to your phone. You could plug your device into a malicious USB charging port, which would try to compromise you. That's why Android forces you to agree to a prompt every time you plug your device into a new computer with USB debugging enabled.

    Set a Desktop Backup Password

    If you use the above ADB trick to create local backups of your Android device over USB, you can protect them with a password with the Set a desktop backup password option here. This password encrypts your backups to secure them, so you won't be able to access them if you forget the password.

    Disable or Speed Up Animations

    When you move between apps and screens in Android, you're spending some of that time looking at animations and waiting for them to go away. You can disable these animations entirely by changing the Window animation scale, Transition animation scale, and Animator duration scale options here. If you like animations but just wish they were faster, you can speed them up. On a fast phone or tablet, this can make switching between apps nearly instant. If you thought your Android phone was speedy before, just try disabling animations and you'll be surprised how much faster it can seem.

    Force-Enable FXAA For OpenGL Games

    If you have a high-end phone or tablet with great graphics performance and you play 3D games on it, there's a way to make those games look even better. Just go to the Developer Options screen and enable the Force 4x MSAA option. This will force Android to use 4x multisample anti-aliasing in OpenGL ES 2.0 games and other apps. This requires more graphics power and will probably drain your battery a bit faster, but it will improve image quality in some games. This is a bit like force-enabling antialiasing using the NVIDIA Control Panel on a Windows gaming PC.

    See How Bad Task Killers Are

    We've written before about how task killers are worse than useless on Android. If you use a task killer, you're just slowing down your system by throwing out cached data and forcing Android to load apps from system storage whenever you open them again. Don't believe us? Enable the Don't keep activities option on the Developer Options screen and Android will force-close every app you use as soon as you exit it. Enable this option and use your phone normally for a few minutes; you'll see just how harmful throwing out all that cached data is and how much it will slow down your phone. Don't actually use this option unless you want to see how bad it is! It will make your phone perform much more slowly; there's a reason Google has hidden these options away from average users who might accidentally change them.

    Fake Your GPS Location

    The Allow mock locations option allows you to set fake GPS locations, tricking Android into thinking you're at a location where you actually aren't. Use this option along with an app like Fake GPS location and you can trick your Android device and the apps running on it into thinking you're at locations where you actually aren't. How would this be useful? Well, you could fake a GPS check-in at a location without actually going there, or confuse your friends in a location-tracking app by seemingly teleporting around the world.

    Stay Awake While Charging

    You can use Android's Daydream Mode to display certain apps while charging your device. If you want to force Android to display a standard Android app that hasn't been designed for Daydream Mode, you can enable the Stay awake option here. Android will keep your device's screen on while charging and won't turn it off. It's like Daydream Mode, but it can support any app and allows users to interact with it.

    Show Always-On-Top CPU Usage

    You can view CPU usage data by toggling the Show CPU usage option to On. This information will appear on top of whatever app you're using. If you're a Linux user, the three numbers on top probably look familiar; they represent the system load average. From left to right, the numbers represent your system load over the last one, five, and fifteen minutes. This isn't the kind of thing you'd want enabled most of the time, but it can save you from having to install third-party floating CPU apps if you want to see CPU usage information for some reason.

    Most of the other options here will only be useful to developers debugging their Android apps. You shouldn't start changing options you don't understand. If you want to undo any of these changes, you can quickly erase all your custom options by sliding the switch at the top of the screen to Off.
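
    The USB debugging section above mentions using ADB to move files and take full local backups; a short sketch of those commands (adb ships with the Android SDK platform-tools, and the backup/restore pair works on Android 4.0 and later without root):

      adb devices                          # confirm the phone is visible over USB
      adb pull /sdcard/DCIM/ ./photos/     # copy files off the device
      adb push song.mp3 /sdcard/Music/     # copy files onto the device

      # full local backup (optionally protected by the desktop backup password)
      adb backup -apk -shared -all -f backup.ab
      adb restore backup.ab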

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
SQL Server Architecture is a very deep subject, and covering it in a single post is an almost impossible task. However, it is a very popular topic among beginners and advanced users alike. I have requested my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about Beginning SQL Server Architecture. As stated earlier, this is a very deep subject, and in this first article of the series he covers the basic terminology. In future articles he will explore the subject further. Anil Kumar Yadav is a Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc.

In this article we will discuss the MS SQL Server architecture. The major components of SQL Server are:

Relational Engine
Storage Engine
SQL OS

Now we will discuss and understand each one of them.

1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine exactly what your query needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. Different tasks of the Relational Engine:

Query Processing
Memory Management
Thread and Task Management
Buffer Management
Distributed Query Processing

2) Storage Engine: The Storage Engine is responsible for storage and retrieval of the data on the storage system (disk, SAN, etc.). When we talk about any database in SQL Server, there are two types of files created at the disk level: a data file and a log file. The data file physically stores the data in data pages. Log files, also known as write-ahead logs, are used for storing transactions performed on the database. Let's understand the data file and log file in more detail.

Data File: A data file stores data in the form of data pages (8 KB each), and these data pages are logically organized in extents.

Extents: Extents are logical units in the database; a combination of 8 data pages, i.e. 64 KB, forms an extent. Extents can be of two types, mixed and uniform. Mixed extents hold different types of pages (index, system, object data, etc.); uniform extents, on the other hand, are dedicated to only one type.

Pages: These are some of the types of data pages that can be stored in SQL Server:

Data Page: Holds the data entered by the user, except data of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image and xml.
Index: Stores the index entries.
Text/Image: Stores LOB (Large Object) data such as text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml.
GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): Used for saving information related to the allocation of extents.
PFS (Page Free Space): Information related to page allocation and unused space available on pages.
IAM (Index Allocation Map): Information pertaining to extents that are used by a table or index, per allocation unit.
BCM (Bulk Changed Map): Keeps information about the extents changed in a bulk operation.
DCM (Differential Change Map): Information about extents that have been modified since the last BACKUP DATABASE statement, per allocation unit.

Log File: Also known as the write-ahead log, it stores modifications to the database (DML and DDL).
Sufficient information is logged to be able to:

Roll back transactions if requested
Recover the database in case of failure

Write-ahead logging is used to create log entries. Transaction logs are written in chronological order in a circular way, and the truncation policy for logs is based on the recovery model.

3) SQL OS: This lies between the host machine (Windows OS) and SQL Server. All the activities performed on the database engine are taken care of by SQL OS. It is a highly configurable operating system with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQL OS provides various operating system services: memory management deals with the buffer pool and log buffer, and deadlock detection uses the blocking and locking structures. Other services include exception handling and hosting of external components such as the Common Language Runtime (CLR).

I guess this brief article gives you an idea about the various terminologies related to SQL Server architecture. In future articles we will explore them further.

Guest Author: The author of this article is Anil Kumar Yadav, Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
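To see the data file / log file split described above on a live system, here are a couple of illustrative queries (a minimal sketch; both catalog views ship with SQL Server 2005 and later):

    -- List the data (ROWS) and log files of the current database;
    -- size is reported in 8 KB pages, so convert it to MB
    SELECT name, type_desc, physical_name, size * 8 / 1024 AS size_mb
    FROM sys.database_files;

    -- The recovery model drives the log truncation policy mentioned above
    SELECT name, recovery_model_desc
    FROM sys.databases;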

    Read the article

  • ASP.NET, HTTP 404 and SEO

    - by paxer
The other day our SEO Manager told me that he is not happy about the way ASP.NET applications return HTTP response codes for the Page Not Found (404) situation. I started researching and found interesting things, which could probably help others in a similar situation.

1) By default, an ASP.NET application handles a 404 error with the following web.config settings:

          <customErrors defaultRedirect="GenericError.htm" mode="On">
            <error statusCode="404" redirect="404.html"/>
          </customErrors>

However, this approach has a problem, and it is exactly what our SEO manager was talking about. In a Page Not Found situation the server first returns an HTTP 302 Redirect code and then an HTTP 200 (OK) code. The problem: we need the response to end with an HTTP 404 code for SEO purposes.

Solution 1

Let's change our web.config settings a bit to handle the 404 error not with a static HTML page but with an .aspx page:

          <customErrors defaultRedirect="GenericError.htm" mode="On">
            <error statusCode="404" redirect="404.aspx"/>
          </customErrors>

And now let's add these lines to the Page_Load event of the 404.aspx page:

          protected void Page_Load(object sender, EventArgs e)
          {
              Response.StatusCode = 404;
          }

Running the test again, things have improved: the last HTTP response code is 404. But my SEO manager still was not happy, because we still have the 302 code before it, and as he said, this is bad for Google search optimization. So we need the 404 HTTP code alone.

Solution 2

Let's comment out our web.config settings:

          <!--<customErrors defaultRedirect="GenericError.htm" mode="On">
            <error statusCode="404" redirect="404.html"/>
          </customErrors>-->

Now let's open our Global.asax file, or if it does not exist in your project, add it. Then we add the following logic, which detects whether the server error code is 404 (Page Not Found) and handles it:

          protected void Application_Error(object sender, EventArgs e)
          {
              Exception ex = Server.GetLastError();
              if (ex is HttpException)
              {
                  if (((HttpException)ex).GetHttpCode() == 404)
                      Server.Transfer("~/404.html");
              }
              // Code that runs when an unhandled error occurs
              Server.Transfer("~/GenericError.htm");
          }

Cool, now let's start our test again... Yehaa, looks like now we have only the 404 HTTP response code. The SEO manager and Google are happy, and so am I :) Hope this helps!
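One caveat worth noting: transferring to a static .html page can still leave the status code at 200, and on IIS 7 and later the server may substitute its own error page. A hedged variant that combines the two solutions is sketched below; 404.aspx is the page from Solution 1, and Response.TrySkipIisCustomErrors is a real HttpResponse property on IIS 7+:

          // Global.asax: route 404s to our page without any client-side redirect
          protected void Application_Error(object sender, EventArgs e)
          {
              var httpEx = Server.GetLastError() as HttpException;
              if (httpEx != null && httpEx.GetHttpCode() == 404)
              {
                  Server.ClearError();            // we are handling this one ourselves
                  Server.Transfer("~/404.aspx");  // single request, no 302
              }
          }

          // 404.aspx code-behind: make sure the response really says 404
          protected void Page_Load(object sender, EventArgs e)
          {
              Response.StatusCode = 404;
              Response.TrySkipIisCustomErrors = true; // keep IIS from swapping in its own page
          }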

    Read the article

  • Oracle Database 12c: Oracle Multitenant Option

    - by hamsun
1. Why? 2. What is it? 3. How?

1. Why? The main idea of the 'grid' is to share resources, to make better use of storage, CPU and memory. If a database administrator wishes to implement this idea, he or she must consolidate many databases into one database. One of the concerns of running many applications together in one database is: what will happen if one of the applications must be restored because of a human error? Tablespace point-in-time recovery can be used for this purpose, but there are a few prerequisites; most importantly, the tablespaces must be strictly separated for each application. Another reason for creating separate databases is security: each customer has his own database. Therefore, there is often a proliferation of smaller databases. Each of them must be maintained and upgraded, and each allocates virtual memory and runs background processes, thereby wasting resources. Oracle 12c offers another possibility for virtualization, providing isolation at the database level: the multitenant container database holding pluggable databases.

2. What? Pluggable databases are logical units inside a multitenant container database, which consists of one multitenant container database and up to 252 pluggable databases. The SGA is shared, as are the background processes. The multitenant container database holds metadata information common to pluggable databases inside the System and Sysaux tablespaces, and there is just one Undo tablespace. The pluggable databases have smaller System and Sysaux tablespaces, containing just their 'personal' metadata. New data dictionary views make the information available either at the pdb level (dba_views) or the container level (cdb_views). There are local users, which are known in specific pluggable databases, and common users, known in all containers. Pluggable databases can easily be plugged into another multitenant container database or converted from a non-CDB. They can undergo point-in-time recovery.

3. How? Creating a multitenant container database can be done using the Database Configuration Assistant, where you will find the new option: Create as Container Database. If you prefer 'hand-made' databases you can execute the command from an instance in nomount state:

    CREATE DATABASE cdb1 ENABLE PLUGGABLE DATABASE ....

And of course this can also be achieved through Enterprise Manager Cloud. A freshly created multitenant container database consists of two containers: the root container as the 'rack' and a seed container, a template for future pluggable databases. There are 4 ways to create other pluggable databases:

1. Create an empty pdb from seed
2. Plug in a non-CDB
3. Move a pdb from another CDB
4. Copy a pdb from another CDB

We will discuss option 2: how to plug a non-CDB into a multitenant container database. Three different methods are available:

1. Create an empty pdb and use Datapump in traditional export/import mode or with Transportable Tablespace or Database mode. This method is suitable for pre-12c databases.
2. Create an empty pdb and use GoldenGate replication. When the pdb catches up with the non-CDB, you fail over to the pdb.
3. Databases of version 12c or higher can be plugged in with the help of the new dbms_pdb package.

This is a demonstration of method 3:

Step 1: Connect to the non-CDB to be plugged in and create an XML file with a description of the database. The XML file is written to $ORACLE_HOME/dbs by default and contains mainly information about the datafiles.
Step 2: Check whether the non-CDB is pluggable into the multitenant container database.

Step 3: Create the pluggable database, connected to the multitenant container database. With the nocopy option the files will be reused, but the tempfile is created anew. A service is created and registered automatically with the listener.

Step 4: Delete unnecessary metadata from the PDB SYSTEM tablespace. To connect to the newly created pdb, edit tnsnames.ora and add an entry for the new pdb. Connect to the plugged-in non-CDB and clean up the data dictionary to remove entries now maintained in the multitenant container database. As all kept objects have to be recompiled, this will take a few minutes.

Step 5: The plugged-in database will be automatically synchronised, by creating common users and roles, when it is opened in read write mode for the first time.

Step 6: Verify tablespaces and users. There is only one local tablespace (users) and one local user (scott) in the plugged-in non-CDB pdb_orcl.

This method of plugging in a non-CDB is fast and easy for 12c databases. The method for unplugging a pluggable database from a CDB is to create a new non-CDB, use the new full transportable feature of Datapump, and drop the pluggable database.

About the Author: Gerlinde has been working for Oracle University Germany as one of our Principal Instructors for over 14 years. She started with Oracle 7 and became an Oracle Certified Master for Oracle 10g and 11g. She is a specialist in Database Core Technologies, with profound knowledge in Backup & Recovery, Performance Tuning for DBAs and Application Developers, Datawarehouse Administration, Data Guard and Real Application Clusters.
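The screenshots behind steps 1 to 4 did not survive here, but the SQL involved looks roughly like the following. This is a sketch only: the XML path is an illustrative placeholder (the pdb name pdb_orcl is the article's), while the dbms_pdb calls and the CREATE PLUGGABLE DATABASE syntax are the documented 12c ones:

    -- Step 1: in the non-CDB (opened read only), describe it to an XML file
    BEGIN
      DBMS_PDB.DESCRIBE(pdb_descr_file => '/u01/app/oracle/orcl.xml');
    END;
    /

    -- Step 2: in the CDB, check plug-in compatibility
    SET SERVEROUTPUT ON
    BEGIN
      IF DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
           pdb_descr_file => '/u01/app/oracle/orcl.xml') THEN
        DBMS_OUTPUT.PUT_LINE('Compatible');
      ELSE
        DBMS_OUTPUT.PUT_LINE('Not compatible - see PDB_PLUG_IN_VIOLATIONS');
      END IF;
    END;
    /

    -- Step 3: plug it in, reusing the existing datafiles
    CREATE PLUGGABLE DATABASE pdb_orcl
      USING '/u01/app/oracle/orcl.xml'
      NOCOPY TEMPFILE REUSE;

    -- Step 4: inside the new PDB, clean up the data dictionary
    ALTER SESSION SET CONTAINER = pdb_orcl;
    @?/rdbms/admin/noncdb_to_pdb.sql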

    Read the article

• .NET Math Libraries

    - by JoshReuben
.NET Mathematical Libraries

.NET Builder for MATLAB (The MathWorks Inc. - http://www.mathworks.com/products/netbuilder/)
MATLAB Builder NE generates MATLAB-based .NET and COM components with royalty-free deployment. It creates the components by encrypting MATLAB functions and generating either a .NET or COM wrapper around them.

.NET/Link for Mathematica (www.wolfram.com)
A product that integrates Mathematica and Microsoft's .NET platform in both directions: call .NET from Mathematica and use arbitrary .NET types directly from the Mathematica language, or use and control the Mathematica kernel from a .NET program. It turns Mathematica into a scripting shell to leverage the computational services of Mathematica, and lets you write custom front ends for Mathematica or use Mathematica as a computational engine for another program. It comes with full source code. It leverages MathLink, Wolfram Research's protocol for sending data and commands back and forth between Mathematica and other programs; .NET/Link abstracts the low-level details of the MathLink C API.

Extreme Optimization (http://www.extremeoptimization.com/)
A collection of general-purpose mathematical and statistical classes built for the .NET framework. It combines a math library, a vector and matrix library, and a statistics library in one package. Download the trial of version 4.0 to try it out. Multi-core ready, with full support for Task Parallel Library features including cancellation. Broad base of algorithms covering a wide range of numerical techniques, including linear algebra (BLAS and LAPACK routines), numerical analysis (integration and differentiation) and equation solvers.

Mathematics: leverages parallelism using .NET 4.0's Task Parallel Library.
Basic math: complex numbers, 'special functions' like Gamma and Bessel functions, numerical differentiation.
Solving equations: solve equations in one variable, or solve systems of linear or nonlinear equations.
Curve fitting: linear and nonlinear curve fitting, cubic splines, polynomials, orthogonal polynomials.
Optimization: find the minimum or maximum of a function in one or more variables, linear programming and mixed integer programming.
Numerical integration: compute integrals over finite or infinite intervals, and over 2D and higher dimensional regions. Integrate systems of ordinary differential equations (ODEs).
Fast Fourier Transforms: 1D and 2D FFTs using managed or fast native code (32 and 64 bit).
BigInteger, BigRational, and BigFloat: perform operations with arbitrary precision.

Vector and Matrix Library: real and complex vectors and matrices; single and double precision for elements; structured matrix types including triangular, symmetrical and band matrices; sparse matrices; matrix factorizations: LU decomposition, QR decomposition, singular value decomposition, Cholesky decomposition, eigenvalue decomposition. Portability and performance: calculations can be done in 100% managed code, or in hand-optimized processor-specific native code (32 and 64 bit).

Statistics: data manipulation (sort and filter data, process missing values, remove outliers, etc.; supports .NET data binding); statistical models (simple, multiple, nonlinear, logistic and Poisson regression; Generalized Linear Models; one- and two-way ANOVA); hypothesis tests (14 hypothesis tests, including the z-test, t-test, F-test, runs test, and more advanced tests such as the Anderson-Darling test for normality, one- and two-sample Kolmogorov-Smirnov tests, and Levene's test for homogeneity of variances).
Multivariate statistics: k-means cluster analysis, hierarchical cluster analysis, principal component analysis (PCA), multivariate probability distributions.
Statistical distributions: 29 continuous and discrete statistical distributions, including uniform, Poisson, normal, lognormal, Weibull and Gumbel (extreme value) distributions.
Random numbers: random variates from any distribution, 4 high-quality random number generators, low-discrepancy sequences, shufflers.

New in version 4.0 (November 2010): support for .NET Framework version 4.0 and Visual Studio 2010; parallelized (multicore-ready) via the TPL; a sparse linear program solver that can solve problems with more than 1 million variables; mixed integer linear programming using a branch and bound algorithm; special functions: hypergeometric, Riemann zeta, elliptic integrals, Fresnel functions, Dawson's integral; a full set of window functions for FFTs.

Pricing (price / update subscription):
Single Developer License: $999 / $399
Team License (3 developers): $1999 / $799
Department License (8 developers): $3999 / $1599
Site License (unlimited developers in one physical location): $7999 / $3199

NMath (http://www.centerspace.net)
.NET math and statistics libraries: matrix and vector classes, random number generators, Fast Fourier Transforms (FFTs), numerical integration, linear programming, linear regression, curve and surface fitting, optimization, hypothesis tests, analysis of variance (ANOVA), probability distributions, principal component analysis, cluster analysis. Built on the Intel Math Kernel Library (MKL), which contains highly-optimized, extensively-threaded versions of BLAS (Basic Linear Algebra Subroutines) and LAPACK (Linear Algebra PACKage).

Pricing (price / update subscription):
Single Developer License: $1295 / $388
Team License (5 developers): $5180 / $1554

DotNumerics (http://www.dotnumerics.com/NumericalLibraries/Default.aspx) - free
DotNumerics is a website dedicated to numerical computing for .NET that includes a C# numerical library for .NET containing algorithms for linear algebra, differential equations and optimization problems. The linear algebra library includes CSLapack, CSBlas and CSEispack, ports from Fortran to C# of LAPACK, BLAS and EISPACK, respectively.
Linear algebra (CSLapack, CSBlas and CSEispack): systems of linear equations, eigenvalue problems, least-squares solutions of linear systems and singular value problems.
Differential equations: initial-value problems for nonstiff and stiff ordinary differential equations (ODEs): explicit Runge-Kutta, implicit Runge-Kutta, Gear's BDF and Adams-Moulton.
Optimization: unconstrained and bounded constrained optimization of multivariate functions (L-BFGS-B, Truncated Newton and Simplex methods).

Math.NET Numerics (http://numerics.mathdotnet.com/) - free
An open source numerical library that includes special functions, linear algebra, probability models, random numbers, interpolation and integral transforms. A merger of dnAnalytics with Math.NET Iridium; in addition to a purely managed implementation it will also support native hardware optimization. Features: constants and special functions; complex type support; real and complex, dense and sparse linear algebra (with LU, QR, eigenvalue and other decompositions); non-uniform probability distributions, multivariate distributions, sample generation; alternative uniform random number generators; descriptive statistics, including order statistics; various interpolation methods, including barycentric approaches and splines; numerical function integration (quadrature) routines; integral transforms, like the Fourier transform (FFT) with arbitrary length support, and Hartley; spectral-space aware sequence manipulation (signal processing); combinatorics, polynomials, quaternions, basic number theory. Parallelized where appropriate to leverage multi-core and multi-processor systems; fully managed or (if available) using native libraries (Intel MKL, ACMS, CUDA, FFTW). Provides a native facade for F# developers.
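To give a feel for one of the free options, here is a minimal Math.NET Numerics sketch (assuming the MathNet.Numerics NuGet package, v3 or later, is referenced; the sample numbers are made up):

    using System;
    using MathNet.Numerics;
    using MathNet.Numerics.LinearAlgebra;

    class MathNetDemo
    {
        static void Main()
        {
            // Solve the linear system Ax = b
            var a = Matrix<double>.Build.DenseOfArray(
                new double[,] { { 3, 1 }, { 1, 2 } });
            var b = Vector<double>.Build.Dense(new double[] { 9, 8 });
            var x = a.Solve(b);
            Console.WriteLine(x);  // x = (2, 3)

            // Least-squares fit of y = intercept + slope * x
            double[] xs = { 1, 2, 3, 4 };
            double[] ys = { 2.1, 3.9, 6.2, 7.8 };
            var fit = Fit.Line(xs, ys);  // returns (intercept, slope)
            Console.WriteLine("y = {0:F2} + {1:F2} x", fit.Item1, fit.Item2);
        }
    }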

    Read the article

  • New Web Comic: Code Monkey Kung Fu

    - by Dane Morgridge
It's something I've wanted to do for quite some time, and I decided it was finally time. Yesterday I launched a new web comic, "Code Monkey Kung Fu". After being a programmer for more than ten years, I've come across quite a few hilarious situations and will be drawing on them for inspiration. I also have four kids, so they will probably produce a lot of material as well. My plan is to release on Tuesdays, with additional comics mixed in on occasion. I hope you enjoy! http://www.codemonkeykungfu.com

    Read the article

  • Technical Screencast Series

    - by Ben Griswold
Noah and I have started to produce a series of technical screencasts. In the spirit of Dimecasts.net, we're limiting each episode to ten minutes, as we thought the development community could benefit from short, focused episodes. We're just getting started, but I'm really pleased with our progress and I'm very excited about what's to come. The first three episodes are focused on the .NET stack (specifically Visual Studio Solution Setup, Managing .NET External Dependencies and Working with the ASP.NET Membership Provider), but since we work in a mixed shop of .NET and Java development, I'm sure we'll eventually introduce all sorts of topics. We're currently putting together a list of shows. If you have suggestions, please let me know. I plan to post the episodes to johnnycoder as they roll out, and who knows? Maybe your screencast idea will show up next.

    Read the article

  • ADF Desktop Integration Security Explained

    - by juan.ruiz
ADFdi provides secure access to spreadsheets within MS Excel. Developers as well as administrators may wonder how the security features work in this mixed layout, with MS Excel accessing Java EE business services, and what system administrators should expect when deploying an ADF solution that offers ADFdi capabilities. Shaun Logan from the ADFdi team published an excellent article back in January covering the ADF desktop integration security features and implementation in great detail. You can find the article here: http://www.oracle.com/technology/products/jdev/11/collateral/security%20whitepaper%20for%20adfdi%20r1%20final.pdf Enjoy!

    Read the article

  • GitHub Integration in Windows Azure Web Site

    - by Shaun
Microsoft has just announced an update for Windows Azure Web Site (a.k.a. WAWS). Four major features were added to WAWS: free scaling mode, GitHub integration, custom domains and multiple branches. Since I work in Node.js and would like to have my code in GitHub and deployed automatically to my Windows Azure Web Site once I sync my code, this feature is very good news to me.

It's very simple to establish the GitHub integration in WAWS. First we need a clean WAWS; in its dashboard page, click "Set up Git publishing". Currently WAWS doesn't support changing the publish setting, so if you have an existing WAWS published by TFS or local Git, you have to create a new WAWS and set up Git publishing. Then, in the deployment page, we can see that WAWS now supports three Git publishing modes:

- Push my local files to Windows Azure: In this mode we create a new Git repository on the local machine and commit and publish our code to Windows Azure through Git commands or a GUI.
- Deploy from my GitHub project: In this mode we have a Git repository on GitHub. Once we publish our code to GitHub, Windows Azure downloads the code and triggers a new deployment.
- Deploy from my CodePlex project: Similar to the previous one, but our code lives in a CodePlex repository.

Now let's go back to GitHub and create a new publish repository. Currently the WAWS GitHub integration supports only public repositories; support for private repositories will be available in several weeks. We can manage our repositories on the GitHub website, but as a Windows geek I prefer the GUI tool. So I opened GitHub for Windows, logged in with my GitHub account, selected the "github" category and clicked the "add" button to create a new repository on GitHub. You can download GitHub for Windows here. I specified the repository name, description and local repository, and did not check "Keep this code private". After a few seconds it created a new repository on GitHub and associated it with that folder on my local machine. We can find this new repository on the GitHub website, and in GitHub for Windows we can also find the local repository by selecting the "local" category.

Next, we need to associate this repository with our WAWS. Back in the Windows Azure developer portal, open "Deploy from my GitHub project" in the deployment page and click the "Authorize Windows Azure" link. This brings up a new window on GitHub which lets me allow the Windows Azure application to access my repositories. After I clicked "Allow", Windows Azure retrieved all my GitHub public repositories and let me select which one I want to integrate with this WAWS. I selected the one I had just created in GitHub for Windows.

So that's all; we have completed the GitHub integration configuration. Now let's have a try. In GitHub for Windows, right-click on this local repository and click "open in explorer". Then I added a simple HTML file:

<html>
  <head>
  </head>
  <body>
    <h1>
      I came from GitHub, WOW!
    </h1>
  </body>
</html>

Save it, go back to GitHub for Windows, and commit and publish this change. This uploads our changes to GitHub, and Windows Azure detects the update and triggers a new deployment. If we go back to the Azure developer portal we can find the new deployment, and our commit message is shown as the deployment description as well. And here is the page deployed to WAWS.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
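For readers who prefer the plain Git command line over GitHub for Windows, the same publish flow looks roughly like this (a sketch; the account and repository names are placeholders):

    # One-time setup: turn the site folder into a repo wired to GitHub
    git init
    git remote add origin https://github.com/<your-account>/<your-repo>.git

    # Each publish: commit and push; WAWS picks up the push and redeploys
    git add index.html
    git commit -m "I came from GitHub, WOW!"
    git push origin master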

    Read the article

  • What Did You Do? is a Bad Question

    - by Ajarn Mark Caldwell
Brian Moran (blog | Twitter) did a great presentation today for the PASS Professional Development Virtual Chapter on The Art of Questions. One of the points that Brian made was that there are good questions and bad (or at least not-as-good) questions. Good questions tend to open up the conversation and engender positive reactions (perhaps even trust and respect) between the participants, while bad questions tend to close down a conversation, either through the narrow list of possible responses (e.g. strictly Yes/No) or through the negative reactions they can produce. And this explains why I so frequently had problems troubleshooting real-time problems with users in the past. I'll explain that in more detail below, but before we go on, let me recommend that you watch the recording of Brian's presentation to learn why the question Why is often problematic in the U.S. and yet we so often resort to it.

For a short portion (3 years) of my career, I taught basic computer skills and Office applications in an adult vocational school, and this gave me ample opportunity to do live troubleshooting of user challenges with computers. And like many people who ended up in computer-related jobs, I have also had numerous times where I was called upon by less computer-savvy individuals to help them with some challenge they were having, whether it was part of my job or not. One of the things that I noticed, especially during my time as a teacher, was that when I was helping somebody, typically the first question I would ask them was, "What did you do?" This seemed to me like a good way to start my detective work of trying to figure out what happened, what went wrong, how to fix it, and how to help the person avoid it in the future. I always asked it in a polite tone of voice, as I was just trying to gather the facts before diving in deeper. However, 99.999% of the time I got the same answer: "Nothing!" For a long time this frustrated me because (remember, I'm in detective mode at that point) I knew it could not possibly be true. They HAD to have done SOMETHING; just tell me the last actions you took before this problem presented itself. But no, they always stuck with "Nothing". At which point, with frustration growing, and not a little bit of disdain for their lack of helpfulness, I would usually ask them to move aside while I took over their machine and got them out of whatever they had gotten themselves into. After a while I just grew used to the fact that this was the answer I would usually receive, but I always kept asking, because for the .001% of the people who would actually tell me, I could then help them understand what went wrong and how to avoid it in the future.

Now, after hearing Brian's talk, I understand what the problem was. Even though I meant to be in an information-gathering mode, the words I was using, "What did YOU do?", have such a strong negative connotation that people would instinctively go into defense mode and stop sharing information that might make them look bad. Many of them probably were not even consciously aware that they had gone on the defensive, but the self-preservation instinct, especially self-preservation of the ego, is so strong that people would end up there without even realizing it.

So, if "What did you do?" is a bad question, what would have been better? Well, one suggestion that Brian makes in his talk is something along the lines of, "Can you tell me what led up to this?" or "What was happening on the computer right before this came up?" It's subtle, but the point is to take the focus off the person and their behavior, instead depersonalizing it and talking about events from more of a third-party observer's point of view. With this approach, people will be more likely to talk about what the computer did and what they did in response to it without feeling the interrogation spotlight is on them. They are also more likely to mention other events that occurred around the same time that may or may not be related, but which could certainly help you troubleshoot a larger problem if it is not just user actions. And that is the ultimate goal of your asking the questions. So yes, it does matter how you ask the question, and there are such things as good questions and bad questions. Excellent topic, Brian! Thanks for getting the thinking gears churning!

(Cross-posted to the Professional Development Virtual Chapter blog.)

    Read the article

< Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >