Search Results



  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class that takes a user ID and populates user data from the database during construction, which obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization. At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture:

        class Widget extends Model
        {
            public function __construct( $data = null )
            {
                $this->name = new FormField('length=20&label=Name:');
                $this->manufactured = new FormDate;
                parent::__construct( $data ); // set above fields using incoming array
            }
        }

    Now, this does violate some rules that I have read, such as "avoid new in the constructor," but to my eyes this does not seem untestable. These are properties of the object, not some black box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory() which could supply custom field objects, but I don't believe I would gain anything from this approach. Is this a poor assumption?
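
    For what it's worth, a minimal sketch of the factory variant discussed above — FieldFactory and its method names are invented for illustration, not part of the original code:

        interface FieldFactory
        {
            public function makeText(string $options): FormField;
            public function makeDate(): FormField;
        }

        class Widget extends Model
        {
            public function __construct(FieldFactory $fields, $data = null)
            {
                // The factory can be stubbed in a unit test to supply fake fields.
                $this->name = $fields->makeText('length=20&label=Name:');
                $this->manufactured = $fields->makeDate();
                parent::__construct($data);
            }
        }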

  • Squeezing as much SEO out of a URL as possible.

    - by John Isaacks
    I am working on an ecommerce site. I told our SEO consultant that I plan to make the URL scheme /products/<id>/<name>. This is similar to Stack Overflow's URLs, which are /questions/<id>/<title>. He asked me if I could change the URL scheme to /p/<id>/<name> instead. I know why he wants this change: the word "products" isn't needed to find the correct product and offers no SEO value, so shortening it to just p would make the relevant keywords in the <name> weigh more. His main priority is maximizing SEO, but the part I don't think he is considering is how this affects the semantics of the site. Also, the word "products" looks like it has meaning and a reason for being there; just having a p looks chaotic and ugly to me. I also don't think it makes that much of a difference, does it? Stack Overflow doesn't use /q/<id>/<title> and they do just fine, though I realize there are many factors at play here, not just the URL. So I want some outside opinions on which is the better way and why.

  • Briefly explained: Advanced Security Option (ASO)

    - by Anne Manke
    WHO? Customers who run Oracle Database Enterprise Edition and whose security or business departments require data and/or network encryption, and/or who store personal data in Oracle databases, and/or who want to move database logins from username/password to smartcards or Kerberos.

    WHAT? Activating the Advanced Security option makes the following requirements easy to meet:

    - Encrypt selected table columns, for example where the Payment Card Industry Data Security Standard (PCI DSS) or the European Data Protection Directive calls for encrypting certain data
    - Secure data storage: encryption of all application data
    - No noticeable performance impact
    - Backups are automatically encrypted, preventing data theft from backup media
    - Encryption of network traffic, so sniffer tools cannot capture readable data
    - Current encryption algorithms are used (AES256, 3DES168, among others)

    HOW? The Oracle Advanced Security Option is an important building block of a comprehensive security architecture. It considerably reduces the risk of data misuse and also protects against non-database users such as "root" on Unix: "root" can no longer read the database files without authorization. ASO covers the complete physical stack, from the communication between client and database, through the encrypted storage of the data in the file system, to the retention of the data in a backup system.

    The BVA (Bundesverwaltungsamt, the German Federal Office of Administration) offers its customers more security in the new personnel management system EPOS 2.0 through Oracle security technologies.

    And what else?

    Encryption of network traffic: How does network encryption affect performance? Our customers consistently confirm that, especially in modern multi-tier architectures, users barely notice any performance loss. If more precise performance figures are needed, realistic, customer-specific tests are essential.

    Encryption of application data (Transparent Data Encryption, TDE): Do I have to rewrite my applications so they can use TDE? NO. TDE is completely transparent to your applications.

    Can't I encrypt the data in my application instead? Yes, but the application data is then stored only in LOBs or text fields, and that has serious drawbacks: there are no date or number fields, for example, so no meaningful reporting works on this data. Applications also cannot work with data that was encrypted by a different application. The most important argument against encrypting inside an application, however, is performance. Since no indexes can be created on data encrypted by an application, the database performs a full table scan on every access, reading every row of the affected table. Resource consumption may rise enormously, which in turn may result in higher license costs. Data encrypted with ASO, by contrast, can be read and evaluated by the Oracle DB Firewall.

    Why should I use TDE instead of full disk encryption? TDE offers broader protection, because TDE also protects against system administrators who have no database access but can reach the database files at the operating-system level. In addition, once encrypted, data stays encrypted wherever it is copied, which is not the case with disk encryption.

    Which encryption algorithms are available? AES (256-, 192-, and 128-bit keys) and 3DES (3-key).
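
    As a small illustration of TDE column encryption, a sketch in Oracle SQL — the table and column names are invented, and an encryption wallet/keystore must already be configured:

        CREATE TABLE customers (
            id      NUMBER PRIMARY KEY,
            name    VARCHAR2(100),
            -- stored encrypted on disk with AES256, transparent to applications
            card_no VARCHAR2(19) ENCRYPT USING 'AES256'
        );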

  • HTML5 data-* (custom data attribute)

    - by Renso
    Goal: store custom data with the data attribute on any DOM element and retrieve it. Previously, under HTML4, we used to use classes to store custom data, something to the effect of

        <input class="account void limit-5000 over-4999" />

    and then had to parse the data out of the class. In a book published in 2007, ppk on JavaScript, Peter-Paul Koch explains why and how to use custom attributes to make data more accessible to JavaScript, using name-value pairs. Accessing a custom attribute account-limit=5000 is much easier and more intuitive than trying to parse it out of a class. Plus, what if a class name, for example "color-5", has a representative class definition in a CSS stylesheet that hides it away, or worse, some JavaScript plugin that automatically adds 5000 to it, or something crazy like that, just because it is a valid class name? As you can see, there are quite a few reasons why using classes for data is a bad design and why it was important to define custom data attributes in HTML5.

    Syntax: you define the data attribute by simply prefixing any data item you want to store on any HTML element with "data-". For example, to store our customer's account data with a hidden input element:

        <input type="hidden" data-account="void" data-limit=5000 data-over=4999 />

    How to access the data:

        account  -  element.dataset.account
        limit    -  element.dataset.limit

    You can also access it by using the more traditional get/setAttribute methods, or, if using jQuery:

        $('#element').attr('data-account', 'void')

    Browser support: all except IE. There is an IE hack around this at http://gist.github.com/362081.

    Special note: be AWARE, do not use upper-case when defining your data elements, as it is all converted to lower-case when reading it. So data-myAccount="A1234" will not be found when you read it with element.dataset.myAccount; use only lowercase when reading, so element.dataset.myaccount will work.
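
    A small self-contained sketch of both access styles — the id and values are invented for the demo:

        <input id="acct" type="hidden" data-account="void" data-limit="5000">
        <script>
          var el = document.getElementById('acct');
          // HTML5 dataset API
          console.log(el.dataset.account);               // "void"
          console.log(el.dataset.limit);                 // "5000" (always a string)
          // traditional attribute API, works in older browsers
          console.log(el.getAttribute('data-account'));  // "void"
          el.dataset.limit = '6000';                     // writes back to data-limit
        </script>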

  • Centrino Wireless-N 1000 takes forever to connect and keeps asking for password

    - by waclock
    A few days ago I started having this problem. When I tried to connect to any WiFi connection it would stay connecting forever, and after a minute or so it would ask me for the password again. The strange thing is that this happened out of nowhere; I did not install any new drivers or anything like that. After this happened I decided to uninstall Ubuntu and install it again ("inside Windows") but the problem is still there. Any suggestions would be greatly appreciated.

        0: hp-wifi: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        1: hp-bluetooth: Bluetooth
            Soft blocked: yes
            Hard blocked: no
        2: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no

        description: Ethernet interface
        product: RTL8111/8168B PCI Express Gigabit Ethernet controller
        vendor: Realtek Semiconductor Co., Ltd.
        physical id: 0
        bus info: pci@0000:07:00.0
        logical name: eth0
        version: 06
        serial: 2c:27:d7:aa:e4:7d
        size: 10Mbit/s
        capacity: 1Gbit/s
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
        configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl8168e-3_0.0.4 03/27/12 latency=0 link=no multicast=yes port=MII speed=10Mbit/s
        resources: irq:50 ioport:4000(size=256) memory:c0404000-c0404fff memory:c0400000-c0403fff
        *-network
        description: Wireless interface
        product: Centrino Wireless-N 1000
        vendor: Intel Corporation
        physical id: 0
        bus info: pci@0000:0d:00.0
        logical name: wlan0
        version: 00
        serial: 00:1e:64:09:9c:58
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
        configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-23-generic-pae firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
        resources: irq:52 memory:c4500000-c4501fff
        *-network
        description: Ethernet interface
        physical id: 1
        bus info: usb@2:1.2
        logical name: eth1
        serial: ee:85:2f:7d:80:96
        capabilities: ethernet physical
        configuration: broadcast=yes driver=ipheth ip=172.20.10.2 link=yes multicast=yes

  • MapReduce

    - by kaleidoscope
    MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real-world tasks are expressible in this model, as shown in the paper. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

    Example: a process to count the appearances of each different word in a set of documents.

        void map(String name, String document):
          // name: document name
          // document: document contents
          for each word w in document:
            EmitIntermediate(w, 1);

        void reduce(String word, Iterator partialCounts):
          // word: a word
          // partialCounts: a list of aggregated partial counts
          int result = 0;
          for each pc in partialCounts:
            result += ParseInt(pc);
          Emit(result);

    Here, each document is split into words, and each word is counted initially with a "1" value by the Map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to Reduce, so this function just needs to sum all of its input values to find the total appearances of that word.

    Sarang, K
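
    The same example, translated into a short runnable Python sketch that simulates the shuffle step in memory — this illustrates the model only, not a distributed implementation:

        from collections import defaultdict

        def map_fn(name, document):
            # emit (word, 1) for every word in the document
            for word in document.split():
                yield word, 1

        def reduce_fn(word, partial_counts):
            # sum all partial counts for this word
            return word, sum(partial_counts)

        documents = {"doc1": "the quick fox", "doc2": "the lazy dog"}

        # shuffle: group intermediate pairs by key
        groups = defaultdict(list)
        for name, doc in documents.items():
            for word, count in map_fn(name, doc):
                groups[word].append(count)

        results = dict(reduce_fn(w, counts) for w, counts in groups.items())
        print(results)  # {'the': 2, 'quick': 1, 'fox': 1, 'lazy': 1, 'dog': 1}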

  • 10.10 boots to command line login prompt

    - by greggory.hz
    I recently installed Ubuntu 10.10 on a computer that was previously running 10.04 (which worked fine). Now, each time I boot up, it starts up at a command line login prompt. I can log in and it stays at the command line (as expected). I can then manually start gdm with sudo start gdm and it works fine. I can also enable compiz (using proprietary nvidia drivers), so I'm reasonably confident that it's not a driver problem (at least not in the sense that the drivers just flat out aren't working). Interestingly, if I leave it at the command prompt without logging in, after about 5 or 10 minutes gnome starts up on its own. I'm not sure what is causing this. This is what dmesg | tail gives me after a manual start of gdm:

        [  15.664166] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 270.18 Tue Jan 18 21:46:26 PST 2011
        [  15.991304] type=1400 audit(1297543976.953:11): apparmor="STATUS" operation="profile_load" name="/usr/share/gdm/guest-session/Xsession" pid=990 comm="apparmor_parser"
        [  16.606986] eth0: link up, 100Mbps, full-duplex, lpa 0xCDE1
        [  18.798506] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro,commit=0
        [  26.740010] eth0: no IPv6 routers present
        [  90.444593] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro,commit=0
        [ 189.252208] audit_printk_skb: 21 callbacks suppressed
        [ 189.252213] type=1400 audit(1297544150.218:19): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1876 comm="apparmor_parser"
        [ 189.252584] type=1400 audit(1297544150.218:20): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1876 comm="apparmor_parser"
        [ 351.159585] lo: Disabled Privacy Extensions

  • Design guidelines for saving a big byte stream in C# [migrated]

    - by Praveen
    I have an application where I am receiving a big byte array very fast, roughly every 50 milliseconds. The byte array contains some information, like a file name. The data (byte array) may come from several sources. Each time I receive the data, I have to find the file name and append the data to that file. I need some guidelines on how I should design this so that it works efficiently. Following is my code:

        public class DataSaver
        {
            private static Dictionary<string, FileStream> _dictFileStream;

            public static void SaveData(byte[] byteArray)
            {
                string fileName = GetFileNameFromArray(byteArray);
                FileStream fs = GetFileStream(fileName);
                fs.Write(byteArray, 0, byteArray.Length);
            }

            private static FileStream GetFileStream(string fileName)
            {
                FileStream fs;
                bool hasStream = _dictFileStream.TryGetValue(fileName, out fs);
                if (!hasStream)
                {
                    fs = new FileStream(fileName, FileMode.Append);
                    _dictFileStream.Add(fileName, fs);
                }
                return fs;
            }

            public static void CloseSaver()
            {
                foreach (var key in _dictFileStream.Keys)
                {
                    _dictFileStream[key].Close();
                }
            }
        }

    How can I improve this code? Maybe I need to create a thread to do the saving.
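
    One possible direction, sketched rather than prescribed: hand the arrays to a single consumer thread through a blocking queue, so callers return immediately and the stream dictionary needs no locking. The filename-extraction delegate stands in for the question's GetFileNameFromArray helper:

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.IO;
        using System.Threading.Tasks;

        public class QueuedDataSaver
        {
            private readonly BlockingCollection<byte[]> _queue = new BlockingCollection<byte[]>();
            private readonly Dictionary<string, FileStream> _streams = new Dictionary<string, FileStream>();
            private readonly Task _consumer;

            public QueuedDataSaver(Func<byte[], string> getFileName)
            {
                // Single consumer thread: writes are serialized, so no locks are needed.
                _consumer = Task.Run(() =>
                {
                    foreach (var data in _queue.GetConsumingEnumerable())
                    {
                        string fileName = getFileName(data);
                        FileStream fs;
                        if (!_streams.TryGetValue(fileName, out fs))
                        {
                            fs = new FileStream(fileName, FileMode.Append);
                            _streams.Add(fileName, fs);
                        }
                        fs.Write(data, 0, data.Length);
                    }
                });
            }

            // Producers just enqueue and return; the 50 ms arrival rate is decoupled from disk speed.
            public void SaveData(byte[] byteArray)
            {
                _queue.Add(byteArray);
            }

            public void Close()
            {
                _queue.CompleteAdding();
                _consumer.Wait();
                foreach (var fs in _streams.Values)
                    fs.Close();
            }
        }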

  • Leveraging .Net 4.0 Framework Tools For Encrypting Web Configuration Sections

    - by Sam Abraham
    I would like to share a few points with regards to encrypting web configuration sections in .Net 4.0. This information is also applicable to .Net 3.5 and 2.0. Two methods work perfectly for encrypting connection strings in a web project configuration file:

    1- Do it all yourself! In this approach, helper functions for encrypting/decrypting configuration file content are implemented. The program would explicitly retrieve the appropriate content from the configuration file, then decrypt it appropriately. Disadvantages of this implementation are the added overhead of maintaining the encryption/decryption code, as well as the burden of always ensuring sections are appropriately decrypted before use and encrypted appropriately whenever edited.

    2- Leverage the .Net 4.0 Framework (the way to go!) Fortunately, all the tools needed for protecting configuration files are built into the .Net 2.0/3.5/4.0 versions with very little setup needed. To encrypt connection strings, one can use the ASP.Net IIS Registration Tool (Aspnet_regiis.exe). Note that a 64-bit version of the tool also exists under the Framework64 folder for 64-bit systems. The command we need to encrypt our web.config file connection strings is simply the following:

        Aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" -prov "RsaProtectedConfigurationProvider"

    To later decrypt this configuration section:

        Aspnet_regiis -pd "connectionStrings" -app "/SampleApplication"

    The following is a brief description of the command line options used in the example above. Aspnet_regiis supports many more options, which you can read about in the links provided for reference below.

        Option  Description
        -pe     Section name to encrypt
        -pd     Section name to decrypt
        -app    Web application name
        -prov   Encryption/Decryption provider

    ASP.Net automatically decrypts the content of the web.config file at runtime, so no programming changes are needed.

    Another tool, aspnet_setreg.exe, is to be used if certain configuration file sections pertinent to the .Net runtime are to be encrypted. For more information on when and how to use aspnet_setreg, please refer to the references below.

    Hope this helps!

    Some great references concerning the topic:

    http://msdn.microsoft.com/en-us/library/ff650037.aspx
    http://msdn.microsoft.com/en-us/library/zhhddkxy.aspx
    http://msdn.microsoft.com/en-us/library/dtkwfdky.aspx
    http://msdn.microsoft.com/en-us/library/68ze1hb2.aspx
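
    For context, this is the kind of section being protected — a minimal sketch with a made-up connection string:

        <configuration>
          <connectionStrings>
            <!-- plain text before running Aspnet_regiis -pe; after encryption this
                 element's contents are replaced by an EncryptedData element -->
            <add name="MainDb"
                 connectionString="Data Source=.;Initial Catalog=Shop;Integrated Security=True"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>
        </configuration>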

  • Bash script runs fine, but not in cron

    - by radiotech
    I have a script that's supposed to record a shoutcast stream for an hour, convert it to mp3, and then save it. The script runs correctly when I run it from the terminal, but I can't seem to get it to run in cron (where it should run every hour at the top of the hour). Here's the line in crontab:

        0 * * * * /medialib/tech/bin/recordstream 2>&1 >> /medialib/tech/cron.log

    and here's the script:

        #!/bin/bash
        name="$(date +%s)"
        mp3_name=$name.mp3
        wav_name=$name.wav
        timeout -sHUP 60m vlc -I dummy --sout "#transcode{channels=2}:std{access=file,mux=wav,dst=/medialib/stream_backup/wav/$wav_name}" /medialib/tech/lib/listen.m3u
        lame --mp3input /medialib/stream_backup/wav/$wav_name /medialib/stream_backup/$mp3_name
        rm /medialib/stream_backup/wav/$wav_name

    Thank you!

    EDIT: Contents of cron.log (this text has been in the log file since it was transferred from an old server, where it was working):

        VLC media player 2.0.8 Twoflower
        Command Line Interface initialized. Type `help' for help.
        > Shutting down.
        VLC media player 2.0.8 Twoflower
        Command Line Interface initialized. Type `help' for help.
        > Shutting down.
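
    One aside worth checking, offered as a suggestion rather than a confirmed diagnosis: in the crontab line above, 2>&1 appears before >>, so stderr goes to cron's original stdout (mailed or discarded) rather than to the log file. The conventional ordering to capture errors in the log is:

        0 * * * * /medialib/tech/bin/recordstream >> /medialib/tech/cron.log 2>&1

    Cron also runs with a minimal PATH, so invoking vlc, lame, and timeout by absolute path inside the script is a common fix for scripts that work in a terminal but not under cron.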

  • RDA and Fusion Middleware Diagnostic Framework Integration

    - by Daniel Mortimer
    Further to my last blog entry, "FMw Diagnostic Framework: Automatic Capture of Diagnostic Data Upon First Failure!", I have spent some time exploring how Remote Diagnostic Agent (RDA) integrates with Diagnostic Framework. Remote Diagnostic Agent, by default, collects the information about the last 10 incidents which have been captured in the Automatic Diagnostic Repository (ADR). This information can be located via RDA's Start Page menu system. See screenshot below.

    Screenshot - Viewing Diagnostic Framework Incident Information in a RDA Package

    Note: in the next release of RDA - version 4.30 - the Diagnostic Repository menu label will have its own position in the weblogic managed server sub menu hierarchy rather than be a child menu item of the logs menu.

    Diagnostic Framework is also capable of launching the RDA engine and including the output in an incident package. This is achieved via the command "IPS GENERATE PACKAGE". The RDA output is written to:

        DOMAIN_HOME/servers/<server name>/adr/diag/ofm/<domain name>/<server name>/incpkg/pkg_[number]/seq_[number]/rda

    The RDA collected files are best viewed (as shown in the screenshot above) by opening the RDA Start Page - "DFW__start.htm" - from this directory in a browser. If you do not have a browser available on the host machine, zip the contents of the rda directory and transfer and extract it to a machine which has a browser.

    The explanation of the integration goes a little deeper. If you want to know more, read My Oracle Support document: Understanding RDA and FMW 11g Diagnostic Framework Integration [Document 1503644.1]
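
    For reference, a hedged sketch of driving incident packaging from the adrci command line — the ADR homepath and incident id below are invented for the example:

        adrci> set homepath diag/ofm/mydomain/AdminServer
        adrci> ips create package incident 42
        adrci> ips generate package 1 in /tmp/pkg

    The create step reports the new package id (1 here); the generate step writes the package zip, which contains the rda subdirectory described above.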

  • Why does DataContractJsonSerializer not include a generic method like JavaScriptSerializer?

    - by Patrick Magee
    So the JavaScriptSerializer was deprecated in favor of the DataContractJsonSerializer.

        var client = new WebClient();
        var json = await client.DownloadStringTaskAsync(url); // http://example.com/api/people/1

        // Deprecated, but clean looking and generally fits in nicely with
        // other code in my app domain that makes use of generics
        var serializer = new JavaScriptSerializer();
        Person p = serializer.Deserialize<Person>(json);

        // Now have to make use of ugly typeof to get the Type when I
        // already know the Type at compile time. Why no generic type T?
        var serializer = new DataContractJsonSerializer(typeof(Person));
        Person p = serializer.ReadObject(json) as Person;

    The JavaScriptSerializer is nice and allows you to deserialize using a generic type T in the method name. Understandably, it's been deprecated for good reason: with the DataContractJsonSerializer, you can decorate your type to be deserialized with various things, so it isn't as brittle as the JavaScriptSerializer. For example:

        [DataMember(Name = "personName")]
        public string Name { get; set; }

    Is there a particular reason why they decided to only allow users to pass in the Type?

        Type type = typeof(Person);
        var serializer = new DataContractJsonSerializer(type);
        Person p = serializer.ReadObject(json) as Person;

    Why not this?

        var serializer = new DataContractJsonSerializer();
        Person p = serializer.ReadObject<Person>(json);

    They can still use reflection with the DataContract-decorated attributes based on the T that I've specified in .ReadObject<T>(json)
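
    Nothing stops you from wrapping the non-generic API to get the generic shape the question asks for — a minimal sketch, with names of my own invention (not part of the framework). Note that the real ReadObject overloads take a Stream or XmlReader rather than a string:

        using System.IO;
        using System.Runtime.Serialization.Json;
        using System.Text;

        public static class JsonExtensions
        {
            // Generic convenience wrapper around DataContractJsonSerializer.ReadObject
            public static T ReadObject<T>(this string json)
            {
                var serializer = new DataContractJsonSerializer(typeof(T));
                using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
                {
                    return (T)serializer.ReadObject(stream);
                }
            }
        }

        // usage: Person p = json.ReadObject<Person>();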

  • Collection RemoveAll Extension Method

    - by João Angelo
    I had previously posted a RemoveAll extension method for the Dictionary<K,V> class; now it's time to have one for the Collection<T> class. The signature is the same as in the corresponding method already available in List<T>, and the implementation relies on the RemoveAt method to perform the actual removal of each element. Finally, here's the code:

        public static class CollectionExtensions
        {
            /// <summary>
            /// Removes from the target collection all elements that match the specified predicate.
            /// </summary>
            /// <typeparam name="T">The type of elements in the target collection.</typeparam>
            /// <param name="collection">The target collection.</param>
            /// <param name="match">The predicate used to match elements.</param>
            /// <exception cref="ArgumentNullException">
            /// The target collection is a null reference.
            /// <br />-or-<br />
            /// The match predicate is a null reference.
            /// </exception>
            /// <returns>Returns the number of elements removed.</returns>
            public static int RemoveAll<T>(this Collection<T> collection, Predicate<T> match)
            {
                if (collection == null)
                    throw new ArgumentNullException("collection");
                if (match == null)
                    throw new ArgumentNullException("match");

                int count = 0;
                for (int i = collection.Count - 1; i >= 0; i--)
                {
                    if (match(collection[i]))
                    {
                        collection.RemoveAt(i);
                        count++;
                    }
                }
                return count;
            }
        }
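
    A quick usage sketch (the contents are made up for the example; Collection<T> lives in System.Collections.ObjectModel):

        var numbers = new Collection<int> { 1, 2, 3, 4, 5, 6 };
        int removed = numbers.RemoveAll(n => n % 2 == 0); // removes 2, 4, 6
        Console.WriteLine(removed);                       // prints 3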

  • Usage of repository between EF model and code consumer

    - by jim
    I have binary data in my database that I'll have to convert to a bitmap at some point. I was thinking about whether or not it's appropriate to use a repository and do it there. My consumer, which is a presentation layer, will use this repository. For example:

        // This is a class I created for modeling the item as is.
        public class RealItem
        {
            public string Name { get; set; }
            public Bitmap Image { get; set; }
        }

        public abstract class BaseRepository
        {
            // using Unity (http://unity.codeplex.com) to inject the dependency of the entity context.
            [Dependency]
            public Context Context { get; set; }
        }

        public class ItemRepository : BaseRepository
        {
            public List<RealItem> Select()
            {
                IEnumerable<Items> items = from item in Context.Items
                                           select item;
                List<RealItem> lst = new List<RealItem>();
                foreach (var item in items)
                {
                    MemoryStream stream = new MemoryStream(item.Image);
                    Bitmap image = (Bitmap)Image.FromStream(stream);
                    RealItem ritem = new RealItem { Name = item.Name, Image = image };
                    lst.Add(ritem);
                }
                return lst;
            }
        }

    Is this a correct way to use the repository pattern? I'm learning this pattern and I've seen a lot of examples online that are using a repository, but when I looked at their source code... for example:

        public IQueryable<object> Select()
        {
            return from q in base.Context select q;
        }

    as you can see, no behavior is added to the system by their approach, so I was confused: maybe the repository is something else and I got it all wrong. In the end, there should be extra benefits to using them, right?

  • In an Entity/Component system, can component data be implemented as a simple array of key-value pairs? [on hold]

    - by 010110110101
    I'm trying to wrap my head around how to organize components in an Entity Component System once everything in the current scene/level is loaded in memory. (I'm a hobbyist, BTW.)

    Some people seem to implement the Entity as an object that contains a list of Component objects. Components contain data organized as an array of key-value pairs, where the value is serialized "somehow". (Pseudocode is loosely C# for brevity.)

        class Entity
        {
            Guid _id;
            List<Component> _components;
        }

        class Component
        {
            List<ComponentAttributeValue> _attributes;
        }

        class ComponentAttributeValue
        {
            string AttributeName;
            object AttributeValue;
        }

    Others describe components as an in-memory "table". An entity acquires the component by having its key placed in a table. The attributes of the component-entity instance are like the columns in a table:

        class Renderable_Component
        {
            List<RenderableComponentAttributeValue> _entities;
        }

        class RenderableComponentAttributeValue
        {
            Guid entityId;
            matrix4 transformation;
            // other stuff for rendering
            // everything is strongly typed
        }

    Others describe this literally as a table (and such tables sound like an EAV database schema, BTW), with the value serialized "somehow":

        Render_Component_Table
        ----------------------
        Entity Id | Attribute Name | Attribute Value

    and when brought into running code:

        class Entity
        {
            Guid _id;
            Dictionary<string, object> _attributes;
        }

    My specific question is: given various components (Renderable, Positionable, Explodeable, Hideable, etc.), and given that each component has an attribute with a particular name (TRANSLATION_MATRIX, PARTICLE_EMISSION_VELOCITY, CAN_HIDE, FAVORITE_COLOR, etc.), should:

    - an entity contain a list of components, where each component in turn has its own array of named attributes with values serialized somehow, or
    - components exist as in-memory tables of entity references, where each "row" has "columns" representing attributes with values that are specific to each entity instance and are strongly typed, or
    - all attributes be stored in an entity as a single array of named attributes with values serialized somehow (which could have name collisions), or
    - something else?

  • What exactly do I have to pay attention to when choosing a Windows hosting provider?

    - by user850010
    This is my first time choosing a hosting company. It is for a web site made in ASP.NET MVC 3. I thought choosing a provider would be easy, since I found this page, http://www.microsoft.com/web/Hosting/Home, which contains hosting offers. Now, hours later, I am still searching. The reason is that as soon as I start investigating a particular company, something stands out that I do not like. Here are some examples of what I noticed when checking various companies in more detail:

    - The company "about us" page is lacking information about the company. A few of them had just a general description of what they do and nothing else, while some others had information like the company name but no address.
    - Checking the company name in business registry searches gave no results. Two of the companies I checked had both a company name and an address, but I was unable to find them in the registry.
    - Putting the company domain into Google gave mostly results from that domain or web hosting review sites, but not much else. I am assuming that good companies should have search results from other sites too.
    - Low Alexa traffic rank. There was one company whose site looked very professional, but their Alexa traffic ranking was around 2 million.

    Are there any other factors I should pay attention to when choosing a hosting company? Do I have legitimate concerns, or am I just too paranoid?

  • XF86 keybinds in Openbox

    - by vasa1
    Lubuntu uses Openbox as its window manager. ~/.config/openbox/lubuntu-rc.xml is a file that specifies, among other things, keybinds for various commands. Most of the keybinds in lubuntu-rc.xml use modifier keys such as Control, Shift, Alt, and Super. For example, one way of opening a terminal window would be by pressing Control+Alt+T together:

        <!-- Launch a terminal on Ctrl + Alt + T-->
        <keybind key="C-A-T">
          <action name="Execute">
            <command>lxsession-default terminal</command>
          </action>
        </keybind>

    But there is also this:

        <!-- Keybinding for terminal button-->
        <keybind key="XF86WWW">
          <action name="Execute">
            <command>lxsession-default terminal</command>
          </action>
        </keybind>
        <keybind key="XF86Terminal">
          <action name="Execute">
            <command>lxsession-default terminal</command>
          </action>
        </keybind>

    What are keybind key="XF86WWW" and keybind key="XF86Terminal"? How do I locate these keys on my laptop's keyboard? My laptop is a Dell Inspiron N 1545 from 2008.
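
    For what it's worth, XF86WWW and XF86Terminal are X keysym names for dedicated launcher/multimedia keys that some keyboards provide; many laptops simply do not have them. A common way to discover what any key actually sends is xev — press the key inside the small xev window and read the keysym from the output, roughly like this (the example line is illustrative):

        $ xev | grep keysym
        # each key press prints a line such as:
        #   state 0x0, keycode 180 (keysym 0x1008ff18, XF86HomePage), same_screen YES,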

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports the last couple of weeks, I ran into a scenario where I needed to present a parent-child data layout. It is rare that I have seen a report that was a simple tabular or matrix format, and this report continued that trend. I found that the processes for developing complex SSRS reports aren't as commonly described as I would have thought. Below I will lay out the process that I went through to create a solution.

    I started with a List control which will contain the layout of the master (parent) information. This allows for a main repeating report part. The dataset for this report should include the data elements needed to be passed to the subreport as parameters. As you can see, the layout is simply text boxes that are bound to the dataset.

    The next step is to set a row group on the List row. When the dialog appears, select the field that you wish to group your report by. A good example in this case would be the employee name or ID.

    Create a second report which becomes the subreport. The example below has a matrix control. Create the report as you would any parameter-driven document by parameterizing the dataset.

    Add the subreport to the main report inside the row of the List control. This can be accomplished by either dragging the report from the solution explorer or inserting a Subreport control and then setting the report name property.

    The last step is to set the parameters on the subreport. In this case the subreport has EmpId and ReportYear as parameters. While some of the documentation on this states that the dialog will automatically detect the child parameters, this has not been my experience. You must make sure that the names match exactly. Tie the name of the parameter to either a field in the dataset or a parameter of the parent report.

  • Database Backup History From MSDB in a pivot table

    - by steveh99999
    I knocked up a nice little query to display backup history for each database in a pivot table format. I wanted to display the most recent full, differential, and transaction log backup for each database. Here's the SQL:

        WITH backupCTE AS
        (
            SELECT name,
                   recovery_model_desc,
                   [D] AS 'Last Full Backup',
                   [I] AS 'Last Differential Backup',
                   [L] AS 'Last Tlog Backup'
            FROM
            (
                SELECT db.name, db.recovery_model_desc, type, backup_finish_date
                FROM master.sys.databases db
                LEFT OUTER JOIN msdb.dbo.backupset a
                    ON a.database_name = db.name
                WHERE db.state_desc = 'ONLINE'
            ) AS Sourcetable
            PIVOT
            (
                MAX(backup_finish_date) FOR type IN ([D], [I], [L])
            ) AS MostRecentBackup
        )
        SELECT * FROM backupCTE

    Gives output such as this:

    [screenshot of the pivoted backup history]

    With this query, I can then build up some straightforward queries to ensure backups are scheduled and running as expected. For example, the following logic can be used:

    - WHERE [Last Full Backup] IS NULL - i.e. the database has never been backed up.
    - WHERE [Last Tlog Backup] < DATEADD(mi, -60, GETDATE()) AND recovery_model_desc <> 'SIMPLE' - transaction log not backed up in the last 60 minutes.
    - WHERE [Last Full Backup] < DATEADD(dd, -1, GETDATE()) AND [Last Differential Backup] < [Last Full Backup] - no backup in the last day.
    - WHERE [Last Differential Backup] < DATEADD(dd, -1, GETDATE()) AND [Last Full Backup] < DATEADD(dd, -8, GETDATE()) - no differential backup in the last day when the last full backup is over 8 days old.

  • Kernel panic on reboot after failed logical volume resize

    - by Derek
    I attempted to do a logical volume resize yesterday using the following commands:

        $ sudo pvdisplay
          "/dev/sda8" is a new physical volume of "113.11 GiB"
          --- NEW Physical volume ---
          PV Name               /dev/sda8
          VG Name
          PV Size               113.11 GiB
          Allocatable           NO
          PE Size               0
          Total PE              0
          Free PE               0
          Allocated PE          0
          PV UUID               jwyO1o-b2ap-CW51-kx7O-kf26-arim-SM8V6m

        $ sudo vgextend vg /dev/sda8

        $ sudo vgdisplay vg
          --- Volume group ---
          VG Name               vg
          System ID
          Format                lvm2
          Metadata Areas        2
          Metadata Sequence No  9
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                5
          Open LV               5
          Max PV                0
          Cur PV                2
          Act PV                2
          VG Size               131.74 GiB
          PE Size               4.00 MiB
          Total PE              33725
          Alloc PE / Size       4769 / 18.63 GiB
          Free  PE / Size       28956 / 113.11 GiB
          VG UUID               AhusW2-pzFv-3W32-mpv2-s5VG-FN7S-kVSadx

        $ sudo lvresize -L +20GB /dev/mapper/vg-var

    So as you can see, it looks like adding the physical volume to the vg worked, because I see free space available there. When I typed the lvresize command, it never returned. I let this run overnight in the background, but this morning I still couldn't successfully do a pvdisplay or lvdisplay, because I think it was waiting on a lock or something, so the command never returned. When I went to log onto the server's console, I saw a bunch of messages like:

        rcu_sched_state detected stall on cpu 2

    Now when I boot, I get a kernel panic error and a message about not being able to mount /mapper/vg-root:

        cannot open root device "mapper/vg-root" or unknown_block(0,0)
        Kernel panic - not syncing: VFS: Unable to mount root file system on unknown_block(0,0)

    What should I do to get my system back up and running? Did I attempt the logical volume resize correctly? Thanks

  • Killing a Plesk 11.5 backup process in Ubuntu

    - by Klaaz
    I want to kill a backup process initiated by Plesk in Ubuntu, but I don't know which processes can safely be killed:

        ps aux | grep backup
        root     20505  0.0  0.0    4392     604 ?     Ss  01:43  0:00 /bin/sh -c [ -x /opt/psa/admin/sbin/backupmng ] && /opt/psa/admin/sbin/backupmng >/dev/null 2>&1
        psaadm   20510  0.0  0.0   30884    1816 ?     S   01:43  0:00 /opt/psa/admin/sbin/backupmng
        psaadm   20511  0.0  0.0   30884     644 ?     S   01:43  0:01 /opt/psa/admin/sbin/backupmng
        psaadm   20512  0.0  0.6  270472   49356 ?     S   01:43  0:03 /usr/bin/sw-engine -c /opt/psa/admin/conf/php.ini /opt/psa/admin/plib/backup/scheduled_backup.php --dump 1
        root     20517  0.0 14.9 1400124 1214696 ?     SN  01:43  0:27 /usr/bin/perl /opt/psa/admin/bin/plesk_agent_manager server --owner-uid=0bd9576c-f832-4362-b4f4-3c1afac22be2 --owner-type=server --dump-rotation=7 --backup-profile-name=serverXL_ --session-path=/opt/psa/PMM/sessions/2013-10-23-014303.864 --output-file=ftp://[email protected]//backup/serverXL/ --ftp-passive-mode
        root     27423  0.0  0.0   13652     888 pts/2 S+  10:35  0:00 grep --color=auto backup
        root     29103  1.0 14.8 1400124 1209760 ?     SN  02:16  5:21 /usr/bin/perl /opt/psa/admin/bin/plesk_agent_manager server --owner-uid=0bd9576c-f832-4362-b4f4-3c1afac22be2 --owner-type=server --dump-rotation=7 --backup-profile-name=serverXL_ --session-path=/opt/psa/PMM/sessions/2013-10-23-014303.864 --output-file=ftp://[email protected]//backup/serverXL/ --ftp-passive-mode
        root     29106  0.0 14.8 1400404 1212456 ?     DN  02:16  0:07 /usr/bin/perl /opt/psa/admin/bin/plesk_agent_manager server --owner-uid=0bd9576c-f832-4362-b4f4-3c1afac22be2 --owner-type=server --dump-rotation=7 --backup-profile-name=serverXL_ --session-path=/opt/psa/PMM/sessions/2013-10-23-014303.864 --output-file=ftp://[email protected]//backup/serverXL/ --ftp-passive-mode

    It seems the FTP process is the culprit?

  • SQL SERVER – Fix Visual Studio Error : Connections to SQL Server files (.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL

    - by pinaldave
    In one of my virtual environments, while I was trying to add a SQL Server database (.mdf) file to an ASP.NET project, I encountered the following error:

        Connections to SQL Server files (.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL:

    For a long time I have been using SQL Server 2012, so this error was a bit interesting to me. I realized that there should not be any need for a SQL Server 2005 installation. I quickly figured out that I can remove this error if I do as mentioned below:

    - Open Microsoft Visual Studio
    - Select Tools >> Options >> Database Tools >> Data Connections
    - Enter the name of an installed instance in the "SQL Server Instance Name" field
    - Click OK

    If you do not know the instance name, you can follow either of these options: 1) use the command line sqlcmd utility, or 2) use SQL Server Management Studio.

    Is there any other way to resolve this error?

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
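
    For the sqlcmd option, a quick sketch from a Windows command prompt (the instance name shown is just an example):

        REM list SQL Server instances visible on the local network
        sqlcmd -L

        REM or ask a known instance for its full name
        sqlcmd -S .\SQLEXPRESS -Q "SELECT @@SERVERNAME"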

  • Using LINQ to Twitter OAuth with Windows 8

    - by Joe Mayo
    In previous posts, I explained how to use LINQ to Twitter with Windows 8, but the example was a Twitter Search, which didn't require authentication. Much of the Twitter API requires authentication, so this post will explain how you can perform OAuth authentication with LINQ to Twitter in a Windows 8 Metro-style application.

    Getting Started

    I have earlier posts on how to create a Windows 8 app and add pages, so I'll assume it isn't necessary to repeat that here. One difference is that I'm using Visual Studio 2012 RC, and some of the terminology and/or library code might be slightly different. Here are the steps to get started:

    1. Create a new Windows Metro style app, selecting the Blank App project template.
    2. Create a new Basic Page and name it OAuth.xaml. Note: you'll receive a prompt window for adding files, and you should click Yes because those files are necessary for this demo.
    3. Add a new Basic Page named TweetPage.xaml.
    4. Open App.xaml.cs and change !rootFrame.Navigate(typeof(MainPage)) to !rootFrame.Navigate(typeof(TweetPage)).

    Now that the project is set up, you'll see the reason why authentication is required by setting up the TweetPage.

    Setting Up to Tweet a Status

    In this section, I'll show you how to set up the XAML and code-behind for a tweet. The tweet logic will check to see if the user is authenticated before performing the tweet. To tweet, I put a TextBox and Button on the XAML page. The following code omits most of the page, concentrating primarily on the elements of interest in this post:

        <StackPanel Grid.Row="1">
            <TextBox Name="TweetTextBox" Margin="15" />
            <Button Name="TweetButton" Content="Tweet" Click="TweetButton_Click" Margin="15,0" />
        </StackPanel>

    Given the UI above, the user types the message they want to tweet and taps Tweet. This invokes TweetButton_Click, which checks to see if the user is authenticated. If the user is not authenticated, the app navigates to the OAuth page. If they are authenticated, LINQ to Twitter does an UpdateStatus to post the user's tweet. Here's the TweetButton_Click implementation:

        void TweetButton_Click(object sender, RoutedEventArgs e)
        {
            PinAuthorizer auth = null;

            if (SuspensionManager.SessionState.ContainsKey("Authorizer"))
            {
                auth = SuspensionManager.SessionState["Authorizer"] as PinAuthorizer;
            }

            if (auth == null || !auth.IsAuthorized)
            {
                Frame.Navigate(typeof(OAuthPage));
                return;
            }

            var twitterCtx = new TwitterContext(auth);
            Status tweet = twitterCtx.UpdateStatus(TweetTextBox.Text);

            new MessageDialog(tweet.Text, "Successful Tweet").ShowAsync();
        }

    For authentication, this app uses PinAuthorizer, one of several authorizers available in the LINQ to Twitter library. I'll explain how PinAuthorizer works in the next section. What's important here is that LINQ to Twitter needs an authorizer to post a tweet. The code above checks to see if a valid authorizer is available. To do this, it uses the SuspensionManager class, which is part of the code generated earlier when creating OAuthPage.xaml. The SessionState property is a Dictionary<string, object>, and I'm using the Authorizer key to store the PinAuthorizer. If the user previously authorized during this session, the code reads the PinAuthorizer instance from SessionState and assigns it to the auth variable. If the user is authorized, auth would not be null and IsAuthorized would be true. Otherwise, the app navigates the user to OAuthPage.xaml, which I'll discuss in more depth in the next section. When the user is authorized, the code passes the authorizer, auth, to the TwitterContext constructor. LINQ to Twitter uses the auth instance to build OAuth signatures for each interaction with Twitter. You no longer need to write any more code to make this happen. The code above accepts the tweet just posted in the Status instance, tweet, and displays a message with the text to confirm success to the user.

    You can pull the PinAuthorizer instance from SessionState, instantiate your TwitterContext, and use it as you need. Just remember to make sure you have a valid authorizer, like the code above. As shown earlier, the code navigates to OAuthPage.xaml when a valid authorizer isn't available. The next section shows how to perform the authorization upon arrival at OAuthPage.xaml.

    Doing the OAuth Dance

    This section shows how to authenticate with LINQ to Twitter's built-in OAuth support. From the user's perspective, they must be navigated to the Twitter authentication page, add credentials, be navigated to a PIN number page, and then enter that PIN in the Windows 8 application. The following XAML shows the relevant elements that the user will interact with during this process:

        <StackPanel Grid.Row="2">
            <WebView x:Name="OAuthWebBrowser" HorizontalAlignment="Left" Height="400" Margin="15" VerticalAlignment="Top" Width="700" />
            <TextBlock Text="Please perform OAuth process (above), enter Pin (below) when ready, and tap Authenticate:" Margin="15,15,15,5" />
            <TextBox Name="PinTextBox" Margin="15,0,15,15" Width="432" HorizontalAlignment="Left" IsEnabled="False" />
            <Button Name="AuthenticatePinButton" Content="Authenticate" Margin="15" IsEnabled="False" Click="AuthenticatePinButton_Click" />
        </StackPanel>

    The WebView in the code above is what allows the user to see the Twitter authentication page. The TextBox is for entering the PIN, and the Button invokes code that will take the PIN and allow LINQ to Twitter to complete the authentication process. As you can see, there are several steps to OAuth authentication, but LINQ to Twitter tries to minimize the amount of code you have to write. The two important parts of the code are the part that starts the authentication process and the part that completes it. The following code, from OAuthPage.xaml.cs, shows a couple of events that are instrumental in making this process happen:

        public OAuthPage()
        {
            this.InitializeComponent();
            this.Loaded += OAuthPage_Loaded;
            OAuthWebBrowser.LoadCompleted += OAuthWebBrowser_LoadCompleted;
        }

    The OAuthWebBrowser_LoadCompleted event handler enables UI controls when the browser is done loading - notice that the TextBox and Button in the previous XAML have their IsEnabled attributes set to False. When the Page.Loaded event is invoked, the OAuthPage_Loaded handler starts the OAuth process, shown here:

        void OAuthPage_Loaded(object sender, RoutedEventArgs e)
        {
            auth = new PinAuthorizer
            {
                Credentials = new InMemoryCredentials
                {
                    ConsumerKey = "",
                    ConsumerSecret = ""
                },
                UseCompression = true,
                GoToTwitterAuthorization = pageLink =>
                    Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                        OAuthWebBrowser.Navigate(new Uri(pageLink, UriKind.Absolute)))
            };

            auth.BeginAuthorize(resp =>
                Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                {
                    switch (resp.Status)
                    {
                        case TwitterErrorStatus.Success:
                            break;
                        case TwitterErrorStatus.RequestProcessingException:
                        case TwitterErrorStatus.TwitterApiError:
                            new MessageDialog(resp.Error.ToString(), resp.Message).ShowAsync();
                            break;
                    }
                }));
        }

    The PinAuthorizer, auth, a field of this class instantiated in the code above, assigns keys to the Credentials property. These are credentials that come from registering an application with Twitter, explained in the LINQ to Twitter documentation, Securing Your Applications. Notice how I use Dispatcher.RunAsync to marshal the web browser navigation back onto the UI thread. Internally, LINQ to Twitter invokes the lambda expression assigned to GoToTwitterAuthorization when starting the OAuth process. In this case, we want the WebView control to navigate to the Twitter authentication page, which is defined with a default URL in LINQ to Twitter and passed to the GoToTwitterAuthorization lambda as pageLink.

    Then you need to start the authorization process by calling BeginAuthorize. This starts the OAuth dance, running asynchronously. LINQ to Twitter invokes the callback assigned to the BeginAuthorize parameter, allowing you to take whatever action you need based on the Status of the response, resp. As mentioned earlier, this is where the user performs the authentication process, enters the PIN, and clicks Authenticate. The handler for Authenticate completes the process and saves the authorizer for subsequent use by the application, as shown below:

        void AuthenticatePinButton_Click(object sender, RoutedEventArgs e)
        {
            auth.CompleteAuthorize(
                PinTextBox.Text,
                completeResp => Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                {
                    switch (completeResp.Status)
                    {
                        case TwitterErrorStatus.Success:
                            SuspensionManager.SessionState["Authorizer"] = auth;
                            Frame.Navigate(typeof(TweetPage));
                            break;
                        case TwitterErrorStatus.RequestProcessingException:
                        case TwitterErrorStatus.TwitterApiError:
                            new MessageDialog(completeResp.Error.ToString(), completeResp.Message).ShowAsync();
                            break;
                    }
                }));
        }

    The PinAuthorizer CompleteAuthorize method takes two parameters: the PIN and a callback. The PIN is what the user entered in the TextBox prior to clicking the Authenticate button that invoked this method. The callback handles the response from completing the OAuth process. The completeResp parameter holds information about the results of the operation, indicated by a Status property of type TwitterErrorStatus. On success, the code assigns auth to SessionState. You might remember SessionState from the previous description of TweetPage - this is where the valid authorizer comes from. After saving the authorizer, the code navigates the user back to TweetPage, where they can type in a message, click the Tweet button, and observe that they have successfully tweeted.

    Summary

    You've seen how to get started with using LINQ to Twitter in a Metro-style application. The generated code contained a SuspensionManager class with a way to manage information across multiple pages via its SessionState property. You also saw how LINQ to Twitter performs authorization in two steps: starting the process and completing it when the user provides a PIN number. Remember to marshal callback threads back onto the UI - you saw earlier how to use Dispatcher.RunAsync to accomplish this. There were a few steps in the process, but LINQ to Twitter did minimize the amount of code you needed to write to make it happen. You can download the MetroOAuthDemo.zip sample on the LINQ to Twitter Samples Page.

    @JoeMayo

  • NEW! Oracle Unified Business Process Management Specialization!

    - by michaela.seika(at)oracle.com
    Be one of the very first to become an Oracle Unified Business Process Management Specialist! Check out the Oracle Unified Business Process Management Knowledge Zone and go to the Specialization criteria to learn how you can become a BPM Specialized Partner. Pass the following assessment tests and exam to meet the competency criteria:

    · Oracle Unified Business Process Management Suite 11g Sales Specialist
    · Oracle Unified Business Process Management Suite 11g PreSales Specialist Assessment
    · Oracle Unified Business Process Management Suite 11g Essentials (1Z1-560) Exam

    Go to the OPN Competency Center to access the Specialist Guided Learning Paths and Boot Camp to get the product information that can help you pass the tests:

    · Oracle Unified Business Process Management 11g Sales Specialist GLP
    · Oracle Unified Business Process Management 11g PreSales Specialist GLP
    · Oracle Unified Business Process Management 11g Implementation Specialist GLP
    · Oracle Unified Business Process Management Suite 11g Implementation Boot Camp

    The Oracle Unified Business Process Management Suite 11g Essentials (1Z1-560) exam is available in beta testing. Pass the exam to become an Oracle Unified Business Process Management Certified Implementation Specialist! As an incentive, we are offering FREE beta exam vouchers to early-adopter partners. As there are a limited number of free vouchers available, requests will be processed on a first-come, first-served basis. To request a voucher, send an e-mail to [email protected], specifying the exam name and your contact information: name, job role, and company name. Register for the OU beta exam at the nearest Pearson VUE testing center.

    For More Information

    · Oracle Certification Program Beta Exams
    · OPN Certified Specialist exams
    · OPN Certified Specialist FAQ

  • How do I get an application to appear as a choice in update-alternatives?

    - by Jay
    I separately installed the Firefox Beta and Alpha channels, and have desktop configuration files pointing to them in ~/.local/share/applications. However, stable Firefox is being used as my default browser by the system. (Firefox Beta used to be used until I messed with the "Default Applications" in System Settings, where it is not listed.) I tried running sudo update-alternatives --config x-www-browser to manually change it, but it's only recognizing Chromium and Firefox (stable) and showing them as choices. What can I do to get custom desktop configuration files in ~/.local/share/applications to be seen as default alternatives? I think I may have to fiddle with the desktop config files, or with mimeinfo.cache or mimeapps.list? Running Oneiric. Here is the content of the firefox-beta.desktop file I created:

        [Desktop Entry]
        Name=Firefox Beta
        Exec=firefox-beta -P Beta -no-remote
        Icon=firefox
        Terminal=false
        X-MultipleArgs=false
        Type=Application
        StartupNotify=true
        StartupWMClass=Firefox
        Categories=GNOME;GTK;Network;WebBrowser;
        Comment[en_US]=Firefox Beta Channel
        MimeType=text/html;text/xml;application/xhtml+xml;application/xml;application/vnd.mozilla.xul+xml;application/rss+xml;application/rdf+xml;image/gif;image/jpeg;image/png;x-scheme-handler/http;x-scheme-handler/https;x-scheme-handler/ftp;x-scheme-handler/chrome;video/webm;
        Name[en_US]=Firefox Beta

        [NewWindow Shortcut Group]
        Name=Open a New Window
        Exec=firefox-beta -new-window about:blank
        TargetEnvironment=Unity
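
    In case it helps, update-alternatives only offers choices that have been registered, so a new alternative has to be installed before --config will list it. A sketch, assuming the beta binary is at /usr/bin/firefox-beta:

        # register the beta build as an x-www-browser alternative with priority 100
        sudo update-alternatives --install /usr/bin/x-www-browser x-www-browser /usr/bin/firefox-beta 100
        sudo update-alternatives --config x-www-browser

        # the GUI-level default browser is tracked separately from the alternatives
        # system; xdg-settings can point it at a .desktop file:
        xdg-settings set default-web-browser firefox-beta.desktop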

    Read the article
