Search Results

Search found 9353 results on 375 pages for 'implementation phase'.


  • ASP.NET Localization: Enabling resource expressions with an external resource assembly

    - by Brian Schroer
    I have several related projects that need the same localized text, so my global resources files live in a shared assembly that's referenced by each of those projects. It took an embarrassingly long time to figure out how to have my .resx files generate "public" properties instead of "internal" so I could have a shared resources assembly (apparently it was pretty tricky pre-VS2008, and my googling bogged me down with some out-of-date instructions). It's easy though – just change the "Custom Tool" to "PublicResXFileCodeGenerator", which can be done via the "Access Modifier" dropdown of the resource file designer window.

    A reference to my shared resources DLL gives me the ability to use the resources in code, but by default the ASP.NET resource expression syntax:

        <asp:Button ID="BeerButton" runat="server" Text="<%$ Resources:MyResources, Beer %>" />

    …assumes that your resources are in your web site project. To make resource expressions work with my shared resources assembly, I added two classes to the resources assembly:

    1) a custom IResourceProvider implementation:

        using System;
        using System.Globalization;
        using System.Web.Compilation;

        namespace DuffBeer
        {
            public class CustomResourceProvider : IResourceProvider
            {
                public object GetObject(string resourceKey, CultureInfo culture)
                {
                    return MyResources.ResourceManager.GetObject(resourceKey, culture);
                }

                public System.Resources.IResourceReader ResourceReader
                {
                    get { throw new NotSupportedException(); }
                }
            }
        }

    2) and a custom factory class inheriting from the ResourceProviderFactory base class:

        using System;
        using System.Web.Compilation;

        namespace DuffBeer
        {
            public class CustomResourceProviderFactory : ResourceProviderFactory
            {
                public override IResourceProvider CreateGlobalResourceProvider(string classKey)
                {
                    return new CustomResourceProvider();
                }

                public override IResourceProvider CreateLocalResourceProvider(string virtualPath)
                {
                    throw new NotSupportedException(String.Format(
                        "{0} does not support local resources.",
                        this.GetType().Name));
                }
            }
        }

    In the "system.web / globalization" section of my web.config file, I point the "resourceProviderFactoryType" property to my custom factory:

        <system.web>
            <globalization culture="auto:en-US" uiCulture="auto:en-US"
                resourceProviderFactoryType="DuffBeer.CustomResourceProviderFactory, DuffBeer" />
        </system.web>

    This simple approach met my needs for these projects, but if you want to create reusable resource provider and factory classes that allow you to specify the assembly in the resource expression, the instructions are here.
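    For the "use the resources in code" side, here is a minimal sketch of strongly-typed access to the shared assembly. The page class and Page_Load wiring are hypothetical; MyResources, the Beer key, and BeerButton come from the markup above:

        // Sketch: strongly-typed access to the shared resource assembly.
        using System;
        using DuffBeer; // the shared resources assembly

        public partial class BeerPage : System.Web.UI.Page
        {
            // Normally declared in the page's .designer.cs file.
            protected System.Web.UI.WebControls.Button BeerButton;

            protected void Page_Load(object sender, EventArgs e)
            {
                // Roughly what the <%$ Resources:MyResources, Beer %> expression
                // resolves, but checked at compile time against the shared assembly.
                BeerButton.Text = MyResources.Beer;
            }
        }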

    Read the article

  • How to handle "circular dependency" in dependency injection

    - by Roel
    The title says "circular dependency", but it's not quite the correct wording, because to me the design seems solid. However, consider the following scenario, where the blue parts are given by an external partner and orange is my own implementation. Also assume there is more than one ConcreteMain, but I want to use a specific one. (In reality each class has some more dependencies, but I've tried to simplify it here.) I would like to instantiate all of this with dependency injection (Unity), but I obviously get a StackOverflowException on the following code, because Runner tries to instantiate ConcreteMain, and ConcreteMain needs a Runner:

        IUnityContainer ioc = new UnityContainer();
        ioc.RegisterType<IMain, ConcreteMain>()
           .RegisterType<IMainCallback, Runner>();
        var runner = ioc.Resolve<Runner>();

    How can I avoid this? Is there any way to structure this so that I can use it with DI? What I'm doing now is setting everything up manually, but that puts a hard dependency on ConcreteMain in the class which instantiates it. This is what I'm trying to avoid (with Unity registrations in configuration). All source code below (very simplified example!):

        using System;
        using System.Threading;
        using System.Threading.Tasks;
        using Microsoft.Practices.Unity;

        public class Program
        {
            public static void Main(string[] args)
            {
                IUnityContainer ioc = new UnityContainer();
                ioc.RegisterType<IMain, ConcreteMain>()
                   .RegisterType<IMainCallback, Runner>();
                var runner = ioc.Resolve<Runner>();
                Console.WriteLine("invoking runner...");
                runner.DoSomethingAwesome();
                Console.ReadLine();
            }
        }

        public class Runner : IMainCallback
        {
            private readonly IMain mainServer;

            public Runner(IMain mainServer)
            {
                this.mainServer = mainServer;
            }

            public void DoSomethingAwesome()
            {
                Console.WriteLine("trying to do something awesome");
                mainServer.DoSomething();
            }

            public void SomethingIsDone(object something)
            {
                Console.WriteLine("hey look, something is finally done.");
            }
        }

        public interface IMain
        {
            void DoSomething();
        }

        public interface IMainCallback
        {
            void SomethingIsDone(object something);
        }

        public abstract class AbstractMain : IMain
        {
            protected readonly IMainCallback callback;

            protected AbstractMain(IMainCallback callback)
            {
                this.callback = callback;
            }

            public abstract void DoSomething();
        }

        public class ConcreteMain : AbstractMain
        {
            public ConcreteMain(IMainCallback callback) : base(callback) { }

            public override void DoSomething()
            {
                Console.WriteLine("starting to do something...");
                var task = Task.Factory.StartNew(() =>
                {
                    Thread.Sleep(5000); /* very long running task */
                });
                task.ContinueWith(t => callback.SomethingIsDone(true));
            }
        }
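    For what it's worth, one common way to break such a cycle is two-phase wiring: move the IMain dependency off Runner's constructor into a writable property (the MainServer property name below is hypothetical), register the Runner instance first, and complete the wiring after resolving IMain. A sketch, not the only option:

        // Sketch: break the constructor cycle with two-phase wiring.
        using System;
        using Microsoft.Practices.Unity;

        public class Runner : IMainCallback
        {
            // Was a constructor parameter; now settable after construction.
            public IMain MainServer { get; set; }

            public void DoSomethingAwesome()
            {
                Console.WriteLine("trying to do something awesome");
                MainServer.DoSomething();
            }

            public void SomethingIsDone(object something)
            {
                Console.WriteLine("hey look, something is finally done.");
            }
        }

        public class Program
        {
            public static void Main()
            {
                IUnityContainer ioc = new UnityContainer();

                var runner = new Runner();                 // phase 1: no IMain yet
                ioc.RegisterInstance<IMainCallback>(runner);
                ioc.RegisterType<IMain, ConcreteMain>();

                runner.MainServer = ioc.Resolve<IMain>();  // phase 2: close the loop
                runner.DoSomethingAwesome();
            }
        }

    This keeps the hard dependency on ConcreteMain behind the IMain registration (which could live in configuration), at the price of constructing Runner by hand.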

    Read the article

  • DVD drive I/O error?

    - by bobby
    I get this error every time I burn a CD/DVD through my DVD drive. Here is the Nero log:

        Nero Burning ROM
        bobby 4C85-200E-4005-0004-0000-7660-0800-35X3-0000-407M-MX37-**** (*)
        Windows XP 6.1 IA32
        WinAspi: -
        NT-SPTI used
        Nero Version: 7.11.3.
        Internal Version: 7, 11, 3, (Nero Express)
        Recorder: Version: UL01 - HA 1 TA 1 - 7.11.3.0
        Adapter driver: HA 1
        Drive buffer: 2048kB
        Bus Type: default
        CD-ROM: Version: 52PP - HA 1 TA 0 - 7.11.3.0
        Adapter driver: HA 1

        === Scsi-Device-Map ===
        === CDRom-Device-Map ===
        ATAPI-CD ROM-DRIVE-52MAX   F: CdRom0
        HL-DT-ST DVDRAM GSA-H12N   G: CdRom1
        =======================

        AutoRun : 1
        Excluded drive IDs:
        WriteBufferSize: 83886080 (0) Byte
        BUFE : 0
        Physical memory : 958MB (981560kB)
        Free physical memory: 309MB (317024kB)
        Memory in use : 67 %
        Uncached PFiles: 0x0
        Use Inquiry : 1
        Global Bus Type: default (0)
        Check supported media : Disabled (0)

        11.6.2010 CD Image
        10:43:02 AM #1 Text 0 File SCSIPTICommands.cpp, Line 450
        LockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL
        10:43:02 AM #2 Text 0 File Burncd.cpp, Line 3186
        HL-DT-ST DVDRAM GSA-H12N Buffer underrun protection activated
        10:43:02 AM #3 Text 0 File Burncd.cpp, Line 3500
        Turn on Disc-At-Once, using CD-R/RW media
        10:43:02 AM #4 Text 0 File DlgWaitCD.cpp, Line 307
        Last possible write address on media: 359848 (79:59.73)
        Last address to be written: 318783 (70:52.33)
        10:43:02 AM #5 Text 0 File DlgWaitCD.cpp, Line 319
        Write in overburning mode: NO (enabled: CD)
        10:43:02 AM #6 Text 0 File DlgWaitCD.cpp, Line 2988
        Recorder: HL-DT-ST DVDRAM GSA-H12N; CDR code: 00 97 27 18; OSJ entry from: Plasmon Data systems Ltd.
        ATIP Data:
        Special Info [hex] 1: D0 00 A0, 2: 61 1B 12 (LI 97:27.18), 3: 4F 3B 4A (LO 79:59.74)
        Additional Info [hex] 1: 00 00 00 (invalid), 2: 00 00 00 (invalid), 3: 00 00 00 (invalid)
        10:43:02 AM #7 Text 0 File DlgWaitCD.cpp, Line 493
        Protocol of DlgWaitCD activities:
        TRM_DATA_MODE1, 2048, config 0, wanted index0 0 blocks, length 318784 blocks [G: HL-DT-ST DVDRAM GSA-H12N]
        --------------------------------------------------------------
        10:43:02 AM #9 Text 0 File ThreadedTransferInterface.cpp, Line 986
        Prepare [G: HL-DT-ST DVDRAM GSA-H12N] for write in CUE-sheet-DAO
        DAO infos:
        ==========
        MCN: ""
        TOCType: 0x00; Session Closed, disc fixated
        Tracks 1 to 1: Idx 0 Idx 1 Next Trk
        1: TRM_DATA_MODE1, 2048/0x00, FilePos 0 307200 653176832, ISRC ""
        DAO layout:
        ===========
        ___Start_|____Track_|_Idx_|_CtrlAdr_|_____Size_|______NWA_|_RecDep__________
            -150 |  lead-in |   0 |    0x41 |        0 |        0 | 0x00
            -150 |        1 |   0 |    0x41 |        0 |        0 | 0x00
               0 |        1 |   1 |    0x41 |   318784 |   318784 | 0x00
          318784 | lead-out |   1 |    0x41 |        0 |        0 | 0x00
        10:43:02 AM #10 Text 0 File SCSIPTICommands.cpp, Line 240
        SPTILockVolume - completed successfully for FSCTL_LOCK_VOLUME
        10:43:02 AM #11 Text 0 File Burncd.cpp, Line 4286
        Caching options: cache CDRom or Network-Yes, small files-Yes ( ON
        10:43:03 AM #19 Text 0 File MMC.cpp, Line 18034
        CueData, Len=32
        41 00 00 14 00 00 00 00
        41 01 00 10 00 00 00 00
        41 01 01 10 00 00 02 00
        41 aa 01 14 00 46 34 22
        10:43:03 AM #20 Text 0 File ThreadedTransfer.cpp, Line 268
        Pipe memory size 83836800
        10:43:16 AM #21 Text 0 File Cdrdrv.cpp, Line 1405
        10:43:16.806 - G: HL-DT-ST DVDRAM GSA-H12N : Queue again later
        10:43:42 AM #22 SPTI -1502 File SCSIPassThrough.cpp, Line 181
        CdRom1: SCSIStatus(x02) WinError(0) NeroError(-1502)
        Sense Key: 0x04 (KEY_HARDWARE_ERROR)
        Sense Code: 0x08
        Sense Qual: 0x03
        CDB Data: 0x2A 00 00 00 4D 00 00 00 20 00 00 00
        Sense Area: 0x70 00 04 00 00 00 00 10 53 29 A1 80 08 03
        Buffer x0c7d9a40: Len x10000
        0xDC 87 EB 41 6E AC 61 5A 07 B2 DB 78 B5 D4 D9 24
        0x8D BC 51 38 46 56 0F EE 16 15 5C 5B E3 B0 10 16
        0x14 B1 C3 6E 30 2B C4 78 15 AB D5 92 09 B7 81 23
        10:43:42 AM #23 CDR -1502 File Writer.cpp, Line 306
        DMA-driver error, CRC error
        G: HL-DT-ST DVDRAM GSA-H12N
        10:43:55 AM #24 Phase 38 File dlgbrnst.cpp, Line 1767
        Burn process failed at 48x (7,200 KB/s)
        10:43:55 AM #25 Text 0 File SCSIPTICommands.cpp, Line 287
        SPTIDismountVolume - completed successfully for FSCTL_DISMOUNT_VOLUME
        10:44:01 AM #26 Text 0 File Cdrdrv.cpp, Line 11412
        DriveLocker: UnLockVolume completed
        10:44:01 AM #27 Text 0 File SCSIPTICommands.cpp, Line 450
        UnLockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL

        Existing drivers:
        Registry Keys:
        HKLM\Software\Microsoft\Windows NT\CurrentVersion\WinLogon

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: High-Performance Computing (also called Technical Computing), at its most simplistic, is a layout of computer workloads where a "head node" accepts work requests and parcels them out to "worker nodes". This is useful in cases such as scientific simulations, drug research, MatLab work, and other situations where large compute loads are required. It's not the immediate-result type of computing many are used to; instead, a "job" or group of work requests is sent to a cluster of computers, and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical of the way that many mainframe computing use-cases work. You can use commodity-based computers to create an HPC cluster, such as with the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster, that you can read more about here. The issue that some organizations have with HPC (from any vendor) is the number of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic.

    Implementation: Recently, Microsoft announced an internal partnership between the HPC group (now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding Compute Nodes in Windows Azure, using a "Broker Node". You can then purchase time for the added machines, and stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC but need to "burst" the node count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations.

    References:

    Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx
    Links for further research on HPC, including Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx

    Read the article

  • Oracle Coherence 3.5 PreSales Boot Camp - Live Virtual Training (12-13/May/10)

    - by Claudia Costa
    Oracle Coherence is an in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data. By automatically and dynamically partitioning data, Oracle Coherence ensures continuous data availability and transactional integrity, even in the event of a server failure. It provides organizations with a robust scale-out data abstraction layer that brokers the supply and demand of data between applications and data sources. Register today!

    What we will cover

    The Oracle Coherence 3.5 Boot Camp is a FREE workshop which provides a quick hands-on technical ramp-up on the Oracle Coherence Data Grid. The Boot Camp provides a product overview, positioning and demo, discussion of customer uses, and hands-on lab work. Participants will leave the Oracle Coherence Boot Camp with a solid understanding of the product and where and how to apply it.

    • Architecting applications for scalability, availability, and performance
    • Introduction to Data Grids and Extreme Transaction Processing (XTP)
    • Coherence case studies
    • Oracle Coherence product demonstration
    • Coherence implementation strategies and architectural approaches
    • Coherence product features and APIs
    • Hands-on labs covering product installation, configuration, sample applications, code examples, and more

    Who should attend

    This boot camp is intended for prospective users and implementers of Oracle Coherence Data Grid, or implementers that have had limited exposure to Coherence and seek a technical overview of the product with hands-on exercises. Ideal participants are Oracle partners (SIs and resellers) with backgrounds in business information systems and a clientele of customers with ongoing or prospective application and/or data grid initiatives. Alternatively, partners with the background described above and an interest in evolving their practice toward a similar profile are suitable participants.

    Prerequisites and workstation requirements

    There are no prerequisite classes for this Boot Camp. However, the labs rely on the Java programming language, so participants should be familiar with Java or similar object-oriented programming languages. Additionally, students with experience in enterprise data storage and manipulation, and knowledge of the relevant concepts, will benefit most from the boot camp. In order to attend this boot camp you need to have the necessary software installed on your laptop prior to attending the class. Please review the Workstation Requirements page and register today!

    Agenda: May 12, 8:30 - May 13, 12:30. To view the full agenda and to register, click here.

    Read the article

  • Integrating Coherence & Java EE 6 Applications using ActiveCache

    - by Ricardo Ferreira
    OK, so you are a developer starting a new Java EE 6 application using the best features of the Java EE platform: Enterprise JavaBeans, JavaServer Faces, CDI, JPA and other cool technologies. And your architecture needs to hold pieces of data in distributed caches to improve the application's performance, scalability and reliability? If that is the scenario you are facing, maybe you should look closely at the solutions provided by Oracle WebLogic Server. Oracle has integrated WebLogic Server with its flagship data caching technology, Oracle Coherence. This seamless integration between the two products provides a comprehensive environment to develop applications without the complexity of extra Java code to manage the cache as a dependency, since Oracle provides a DI ("Dependency Injection") mechanism for Coherence, the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache. In this article, I will show you how to configure ActiveCache in WebLogic and in your Java EE application.

    Configuring WebLogic to manage Coherence

    Before you start changing your application to use Coherence, you need to configure your Coherence distributed cache. The good news is that you can manage all of this without writing a single line of XML or Java: the configuration can be done entirely in the WebLogic administration console. The first thing to do is set up a Coherence cluster. A Coherence cluster is a set of Coherence JVMs configured to form one single view of the cache. This means that you can add or remove members of the cluster without the client application (the application that produces or consumes data from the cache) knowing about the changes. This concept allows your solution to scale out without changing the application server JVMs; you can grow your application in the data grid layer alone.

    To start the configuration, you need to configure a machine that points to the server on which you want to execute the Coherence JVMs. WebLogic Server lets you do this very easily in the Administration Console. In this example, I will call the machine "coherence-server". Remember that for the machine concept to work, you need to ensure that the NodeManager is running on the target server that the machine points to. The NodeManager executable can be found in <WLS_HOME>/server/bin/startNodeManager.sh.

    The next thing to do is configure a Coherence cluster. In the WebLogic administration console, go to Environment > Coherence Clusters and click "New". Call this Coherence cluster "my-coherence-cluster" and click Next. Specify a valid cluster address and port; the Coherence members will communicate with each other through this address and port. Our Coherence cluster is now configured, so it is time to configure the Coherence members and add them to the cluster. In the WebLogic administration console, go to Environment > Coherence Servers and click "New". In the "Name" field, enter "coh-server-1". In the "Machine" field, associate this Coherence server with the machine "coherence-server". In the "Cluster" field, associate it with the cluster named "my-coherence-cluster". Click "Finish". Start the Coherence server using the "Control" tab of the WebLogic administration console. This instructs WebLogic to start a new Coherence JVM on the target machine, which should join the pre-defined Coherence cluster.
    Configuring your Java EE application to access Coherence

    Now let's get to the fun part of the configuration. The first thing to do is tell your Java EE application which Coherence cluster to join. Oracle has updated the WebLogic Server deployment descriptors, so you will not have to change your code or the container deployment descriptors such as application.xml, ejb-jar.xml or web.xml. In this example, I will show you how to enable DI ("Dependency Injection") of a Coherence cache into a Servlet 3.0 component. In the WEB-INF/weblogic.xml deployment descriptor, put the following metadata:

        <?xml version="1.0" encoding="UTF-8"?>
        <wls:weblogic-web-app
            xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                                http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd
                                http://xmlns.oracle.com/weblogic/weblogic-web-app
                                http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">
            <wls:context-root>myWebApp</wls:context-root>
            <wls:coherence-cluster-ref>
                <wls:coherence-cluster-name>my-coherence-cluster</wls:coherence-cluster-name>
            </wls:coherence-cluster-ref>
        </wls:weblogic-web-app>

    As you can see, the "coherence-cluster-name" tag tells our Java EE application that it should join "my-coherence-cluster" when it loads in the web container. Without this information, the application will not be able to access the predefined Coherence cluster; it will form its own Coherence cluster without any members. So never forget to put this information in.

    Now put the coherence.jar and active-cache-1.0.jar dependencies in your WEB-INF/lib application classpath. You need to deploy these dependencies so ActiveCache can automatically take care of the Coherence cluster join phase. They can be found in the following locations:

        <WLS_HOME>/common/deployable-libraries/active-cache-1.0.jar
        <COHERENCE_HOME>/lib/coherence.jar

    Finally, you need to write the code that accesses the Coherence cache in your Servlet. In the following example, we have a Servlet 3.0 component that accesses a Coherence cache named "transactions" and prints to the browser output the content (the ammount property) of one specific transaction.

        package com.oracle.coherence.demo.activecache;

        import java.io.IOException;

        import javax.annotation.Resource;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        import com.tangosol.net.NamedCache;

        @WebServlet("/demo/specificTransaction")
        public class TransactionServletExample extends HttpServlet {

            @Resource(mappedName = "transactions")
            NamedCache transactions;

            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                int transId = Integer.parseInt(request.getParameter("transId"));
                Transaction transaction = (Transaction) transactions.get(transId);
                response.getWriter().println("<center>" + transaction.getAmmount() + "</center>");
            }
        }

    That's it! No more configuration is necessary, and you are all set to start putting data into and getting data out of Coherence. As you can see in the example code, the Coherence cache is treated as a normal dependency in the Java EE container. The magic happens behind the scenes, when ActiveCache lets your application join the defined Coherence cluster.
    The most interesting thing about this approach is that no matter which type of Coherence cache you are using (Distributed, Partitioned, Replicated, WAN-Remote), to the client application it is just a simple attribute member of type com.tangosol.net.NamedCache, and it's all managed by the Java EE container as a dependency. This means that if you inject the same dependency (the Coherence cache named "transactions") into another Java EE component (a JSF managed bean, a stateless EJB), the cache will be the same. Cool, isn't it? And thanks to CDI, we can extend the same support to components outside the Java EE standards, like plain POJOs. This means you are not forced to use only Servlets, EJBs or JSF in order to inject Coherence caches; you can take the same approach with regular POJOs managed by lightweight containers like Spring or Seam.

    Read the article

  • News, Perspectives and Plenty to Talk About - Oracle Partner Day 2012

    - by A&C Redaktion
    What a day! Under the motto "Maximize your Potential", more than 470 participants came together for the Oracle Partner Day 2012. Everything revolved around our partners who, as Silvia Kaske, Senior Director Alliances & Channel Europe North, emphasized in her welcome address, are "a very important building block in Oracle's growth strategy". David Callaghan, Senior Vice President EMEA Alliances & Channel, also stressed in his keynote how unique this partnership is. Nobody else, according to Callaghan, has truly integrated hardware and software to an extent comparable to Oracle. Some vendors may bundle the two together, but by no means as optimally tuned and interwoven as in Oracle's "Red Stack". Alongside keynotes by Jürgen Kunz, SVP Technology Northern Europe & Country Leader Germany, and Christian Werner, Senior Director Alliances & Channels Germany, on news and growth potential in the Oracle universe, plus presentations from various specialization areas, there was even a glimpse into the future of IT: the computer scientist Professor Hermann Maurer presented not only existing and planned innovations, such as the famous computer glasses that are soon supposed to replace the smartphone, but also a good helping of science fiction. Over the course of the day, numerous partners took the opportunity to sit the OPN Implementation Specialist exam on site at the Pearson Vue test center. Many participants were impressed by the many good conversations and made full use of the opportunities for networking and exchanging experiences. With such a packed program it is of course difficult to take everything in, so we have compiled the presentations held at the Oracle Partner Day for you once again here in the agenda. Things got exciting at the Oracle Partner Award Ceremony: for the second time, German partners were honored there who have specialized with particular commitment and success. Who the lucky winners are and what distinguishes their companies can also be read here in the blog. Once again, our warmest congratulations to all the winners! After wild speculation beforehand about what might be hiding behind the "Oracle Sports Challenge", we want to resolve that question here as well: anyone who felt like moving after all that sitting could take on various more or less athletic challenges. These included several games of skill, among them an almost man-high "Oracle Stack" that had to be kept upright Jenga-style, shots on a goal guarded by a fully automatic robo-keeper, and a video wall with a playful reaction test built around the "Red Stack". Throughout the day, participants could collect letters hidden behind QR codes and, with a little luck and skill, win one of three iPod Supernanos. The program was rounded off by performances from the entertainment saxophonists of the "Hot Sax Club", the impressive football freestylers with their ball acrobatics, the close-up magician Marc Gassert, and our DJ, who kept the mood going. You can see impressions and highlights from the Oracle Partner Day in Frankfurt here, in the best-of video and in our photo gallery. Let a successful day pass in review once more, or see what you missed. But don't be sad: the next Oracle Partner Day is sure to come!

    Read the article

  • Just another web startup - platform comparison

    - by Holland
    I'm looking to do a web startup which involves something along the lines of an ecommerce site, yet a little more in-depth than that. While it's something that I would rather not go into detail about in terms of the initial idea, I can specify (on a basic level) what would be required of the website. If you have any observations or opinions derived from personal experience which relate to what you see here, I'd appreciate it if you could share them.

    PayPal's API interaction (definitely). From what I've read about their API, integrating it into a website is VERY expensive, so I'd probably hold off on that until I've (hopefully) generated money, and write my own simple credit-card interaction system.

    SQL backend (obviously). PostgreSQL seems like a pretty good choice, as from what I've read its structure is a bit more "object-oriented" than, say, MySQL's. Then again, I've used MySQL before and haven't had much problem with it whatsoever. Would it be worth learning PostgreSQL for this purpose?

    Java or .NET implementation (preferably Mono, so I can use .NET while hosting the website using Apache). The reason for this is that, frankly, while I know PHP is a great platform to develop websites with, I hate developing with it. Before someone chimes in and flames me for saying that, note that I have nothing against the language; I just don't like it for my purposes. While Mono may be good to go with, I'm aware that ASP.NET MVC 3 hasn't been released for Mono yet, which may make it a pain to work with without the Razor syntax. On top of that, it seems Java is completely FULL of class libraries which deal with web development that can be downloaded from the web. If anyone has any experience with these, I'd appreciate it if that were posted. From what I've read about Spring and Struts2, they seem to be the best out there, especially since they're (AFAIK) MVC. I've considered Python and Django, which do seem REALLY nice, but I don't know much Python, and I'd rather start with something that I already know (language-wise, not framework-wise) than dive into learning a language AND a new framework.

    I'd REALLY like to be able to host my website via Apache, rather than using Windows Server or anything like that, as, frankly, I hate their setup. I'm not dissing it in any way, shape, or form; I'm just saying I dislike it. <3 terminal config. If there is a good reason to go with Windows Server, however, I'd be willing to learn it.

    C# has a lot of things that Java appears not to have, including delegates, unsigned types, and LINQ. Is there anything that Java has which can counter these?

    Read the article

  • Windows Azure Learning Plan - Architecture

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with what an architect needs to know about Windows Azure.

    General Architectural Guidance

    Overview and general information about Azure - what it is, how it works, and where you can learn more.

    Cloud Computing, A Crash Course for Architects (Video): http://www.msteched.com/2010/Europe/ARC202
    Patterns and Practices for Cloud Development: http://msdn.microsoft.com/en-us/library/ff898430.aspx
    Design Patterns, Anti-Patterns and Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/27/design-patterns-anti-patterns-and-windows-azure.aspx
    Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx
    Architecting Applications for High Scalability (Video): http://www.msteched.com/2010/Europe/ARC309
    David Aiken on Azure Architecture Patterns (Video): http://blogs.msdn.com/b/architectsrule/archive/2010/09/09/arcast-tv-david-aiken-on-azure-architecture-patterns.aspx
    Cloud Application Architecture Patterns (Video): http://blogs.msdn.com/b/bobfamiliar/archive/2010/10/19/cloud-application-architecture-patterns-by-david-platt.aspx
    10 Things Every Architect Needs to Know about Windows Azure: http://geekswithblogs.net/iupdateable/archive/2010/10/20/slides-and-links-for-windows-azure-platform-session-at-software.aspx
    Key Differences Between Public and Private Clouds: http://blogs.msdn.com/b/kadriu/archive/2010/10/24/key-differences-between-public-and-private-clouds.aspx
    Microsoft Application Platform at a Glance: http://blogs.msdn.com/b/jmeier/archive/2010/10/30/microsoft-application-platform-at-a-glance.aspx
    Windows Azure is not just about Roles: http://vikassahni.wordpress.com/2010/11/17/windows-azure-is-not-just-about-roles/
    Example Application for Windows Azure: http://msdn.microsoft.com/en-us/library/ff966482.aspx

    Implementation Guidance

    Practical applications for the architect to consider.

    5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx
    Performance-Based Scaling in Windows Azure: http://msdn.microsoft.com/en-us/magazine/gg232759.aspx
    Windows Azure Guidance for the Development Process: http://blogs.msdn.com/b/eugeniop/archive/2010/04/01/windows-azure-guidance-development-process.aspx
    Microsoft Developer Guidance Maps: http://blogs.msdn.com/b/jmeier/archive/2010/10/04/developer-guidance-ia-at-a-glance.aspx
    How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
    A Common Scenario of Multi-instances in Windows Azure: http://blogs.msdn.com/b/windows-azure-support/archive/2010/11/03/a-common-scenario-of-multi_2d00_instances-in-windows-azure-.aspx
    Slides and Links for Windows Azure Platform Best Practices: http://geekswithblogs.net/iupdateable/archive/2010/09/29/slides-and-links-for-windows-azure-platform-best-practices-for.aspx
    AppFabric Architecture and Deployment Topologies guide: http://blogs.msdn.com/b/appfabriccat/archive/2010/09/10/appfabric-architecture-and-deployment-topologies-guide-now-available-via-microsoft-download-center.aspx
    Windows Azure Platform Appliance: http://www.microsoft.com/windowsazure/appliance/

    Integrating Cloud Technologies into Your Organization

    Interoperability with Open Source and other applications; business and cost decisions.

    Interoperability Labs at Microsoft: http://www.interoperabilitybridges.com/
    Windows Azure Service Level Agreements: http://www.microsoft.com/windowsazure/sla/

    Read the article

  • Separating physics and game logic from UI code

    - by futlib
    I'm working on a simple block-based puzzle game. The gameplay consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal, and I'm wondering if you can give me any pointers on how to do it better.

    I've split the code up into two areas, game logic and UI, as I did with a lot of puzzle games: the game logic is responsible for the general rules of the game (e.g. the formal rule system in chess), while the UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces). The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid and draws it by converting logical sizes into pixel sizes (that is, multiplying them by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works:

    1. The player starts to drag a piece.
    2. The UI asks the game logic for the legal movement area of that piece and lets the player drag it within that area.
    3. The player lets go of the piece.
    4. The UI snaps the piece to the grid (so that it is at a valid logical position).
    5. The UI tells the game logic the new logical position (via mutator methods, which I'd rather avoid).

    I'm not quite happy with that. I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: stopping the piece from colliding with others or the boundary, and snapping it to the grid (a sketch of that conversion follows below). I don't like the fact that the UI tells the game logic about the new state; I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI.

    I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that:

    1. Is such a physics layer common, or is it more typical to have the game logic layer do this?
    2. Would the snapping-to-grid and piece-dragging code belong to the UI or to the physics layer?
    3. Would such a physics layer typically work with pixel sizes or with some kind of logical unit, like my game logic layer?
    4. I've seen event-based collision detection in a game's code base once: the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call an onCollision() method on the piece once a collision was detected. What is more common, this approach or asking for the legal movement area first?

    [1] "Layer" is probably not the right word for what I mean, but "subsystem" sounds overblown and "class" is misleading, because each layer can consist of several classes.
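    For reference, here is a minimal sketch of the snap-to-grid conversion described in step 4; the CellSize constant and the clamping bounds are assumptions, not from the game itself:

        // Sketch: convert a dragged pixel coordinate back to the nearest
        // cell on the logical grid, then clamp it into the legal movement
        // area reported by the game logic.
        using System;

        public static class GridSnap
        {
            public const int CellSize = 32; // pixels per logical unit (assumed)

            // Round rather than truncate, so a piece dragged more than half
            // a cell snaps forward instead of jumping back.
            public static int PixelsToCell(int pixels)
            {
                return (int)Math.Round(pixels / (double)CellSize);
            }

            public static int SnapToLegalCell(int pixels, int minCell, int maxCell)
            {
                int cell = PixelsToCell(pixels);
                return Math.Max(minCell, Math.Min(maxCell, cell));
            }
        }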

    Read the article

  • Before the Summit of 2012

    - by Ajarn Mark Caldwell
    Today, Monday, was the first day of the PASS Summit Preconference training events, but instead I spent the day at the free SQL in the City event put on by Red Gate. For me this was not a financial decision (pre-con sessions cost extra above the general Summit registration) but rather a matter of interest. I had already included money for pre-cons in this year's training budget, but none of them really stood out to me, so even if the Red Gate event were not going on at the same time, I probably would not have gone to any pre-cons this year. However, the topics being presented at the SQL in the City event were of great interest to me. There promised to be good information on continuous integration and automated deployment of database changes, which lately has been a real hot topic at my work. And indeed, Red Gate announced the release of a new tool (still in Early Access Program… a.k.a. Beta) called the Deployment Manager. Since we are in the middle of a TFS implementation project, it will be interesting to see how this plays out and compares to what we put together with the automated builds in TFS. But, as I understand it, the primary focus of Deployment Manager is not the build process (Red Gate uses JetBrains' TeamCity for that in their shop) but rather to aid in the deployment of those build packages, as well as providing easy rollback and a good visualization of which versions of software are in which environments. It looks promising, and I've already downloaded the installer package to play with it later.

    Overall, I was quite impressed with the SQL in the City event. Having heard many current and past members of the PASS Board of Directors describe the challenges of putting on a large conference, and the growing pains that the PASS Summit has gone through, I am even more impressed that the Red Gate event ran as smoothly as it did. And the amount of money that Red Gate must have spent is quite impressive, given that this was a no-charge event with a very nice hot lunch and an after-event drinks celebration. Well done, folks!

    Of course it was great to hear from a variety of speakers. Today I listened to some folks from Red Gate like Grant Fritchey (blog | @GFritchey) and David Atkinson (Product Manager for SQL Source Control and now the Deployment Manager tool set); and also Brent Ozar (blog | @BrentO) and Buck Woody (blog | @BuckWoody). By the way, if you have never seen either Brent or Buck speak, you really should. Different styles, but both are very entertaining and educational at the same time. I love Buck's sense of humor (here's a tip… don't be late to Buck's session or you'll become part of the presentation) and I praise Brent's slides. Brent's style very much reminds me of that espoused by Garr Reynolds on his Presentation Zen blog (and book), and I am impressed that he can make a technical presentation so engaging.

    It was a great day, a great way to kick off the week, and I am excited to get into the full Summit!

    Read the article

  • SQL SERVER – Difference Between CURRENT_TIMESTAMP and GETDATE() – CURRENT_TIMESTAMP Equivalent in SQL Server

    - by pinaldave
    A common question I often get from Oracle/MySQL professionals is: "What is the equivalent to CURRENT_TIMESTAMP in SQL Server?" And here is a common question I often get from SQL Server professionals: "What is the difference between CURRENT_TIMESTAMP and GETDATE()?" Very simple questions, but they have shown up so frequently that I felt I should write about them. Well, in SQL Server, GETDATE() is equivalent to CURRENT_TIMESTAMP, and if you use CURRENT_TIMESTAMP in your select statement it will work fine; run a SELECT that returns both and you will see they return the same value. Now let us go to the next question, regarding the difference between GETDATE and CURRENT_TIMESTAMP. Well, as a matter of fact, there is no difference between them in SQL Server (Reference Link). CURRENT_TIMESTAMP is an ANSI SQL function, whereas GETDATE is the T-SQL implementation of the same function. Both of them derive their value from the operating system of the computer on which the SQL Server instance is running.

    The above discussion prompts another question: in that case, which should one use, GETDATE or CURRENT_TIMESTAMP? Well, this is indeed a tricky and interesting question. I am very comfortable using GETDATE(), so I will continue to use it, but as a matter of fact there is no right or wrong answer. If you want to follow the ancient saying "When in Rome, do as the Romans do", I suggest using GETDATE(); otherwise, continue using CURRENT_TIMESTAMP. With that said, there is one very important property we all need to keep in mind: if you use CURRENT_TIMESTAMP while creating an object, it is automatically converted to GETDATE() and stored internally. To illustrate what I am suggesting, create a table using the following script:

        CREATE TABLE [dbo].[TestTable](
            [Cold2] [datetime] NULL
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[TestTable] ADD DEFAULT (CURRENT_TIMESTAMP) FOR [Cold2]
        GO

    Now go to SSMS and generate the script for the table, and you will notice the following syntax:

        CREATE TABLE [dbo].[TestTable](
            [Cold2] [datetime] NULL
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[TestTable] ADD DEFAULT (GETDATE()) FOR [Cold2]
        GO

    You can see that SQL Server has automatically converted CURRENT_TIMESTAMP to GETDATE(). I guess this gives us an idea of how they behave. Now go ahead and make your choice! Do let me know which one you will use, CURRENT_TIMESTAMP or GETDATE(), in the comments area.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • IDC Analyst Report Touts Oracle–Accenture Strategic Initiative

    - by kristin.jellison
    Hi there, partners! Oracle Engineered Systems have been getting some love lately, and we want to share it with you! The market intelligence and advisory firm IDC recently released a report lauding Oracle and Accenture’s strategic initiative to route the performance and flexibility of Oracle Engineered Systems to clients. The report, "Oracle and Accenture Strategic Alliance Places Big Bet on Engineered Systems,” by Steve White, reflects a largely positive analysis of the relationship. White notes that the alliance is “one of the largest in the industry.” Under the relationship, Accenture has incorporated Oracle Engineered Systems—including Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle SuperCluster, and Oracle Exalytics In-Memory Machine—into its leading datacenter transformation consulting services. Together, the two companies have also created bespoke platforms, such as the Accenture Foundation Platform for Oracle, which helps clients accelerate deployments on Oracle Fusion Middleware, running Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine. Oracle Engineered Systems deliver a single, engineered platform—including server to storage and networking. This makes it easier and cheaper for Accenture clients around the world to prepare their datacenters for managing, processing and analyzing the massive amounts of data they (rightly) anticipate seeing in the next decade. The new solutions can help reduce the effort and cost to migrate any vendor database to an Oracle Engineered Systems platform, which can lower the cost of ownership by up to 50 percent. For its part, Accenture has built a team of 300 consultants to implement and increase the flexibility and stability of client datacenters. This move further expands one of the fastest-growing full-service Oracle Enterprise solutions. Over 52,000 Accenture consultants are qualified to implement, upgrade and outsource the Oracle product suite. Accenture is a Diamond-level member of Oracle PartnerNetwork (OPN). For Oracle Partners, this update should give you at least two things to walk away with. First, this initiative is showing signs of success. As Marty Cole, group chief executive for Accenture’s Technology growth platform, put it, “We are seeing an increasing number of clients recognizing the value of consolidating their databases and taking advantage of the cost and performance benefits delivered by these solutions.” The pipeline is there—and not just for Accenture. Use this example to show your clients that investments in Oracle Engineered Systems are on the rise. Second, recognize that Oracle Engineered Systems represent one of the biggest platforms for growth that Oracle has to offer partners. As part of the agreement, Accenture is able to provide: Platform Readiness Assessments Platform Implementation App Rationalization Database Rationalization Managed Services These are all enablement opportunities you can offer customers under Oracle’s partner programs —to continue building the value of their investments, and the value of your relationship with Oracle. Take a read through the IDC report. To learn more about the partnership, see this press release. Happy selling! The OPN Communications Team

    Read the article

  • Issue in setting up a VPN connection (IKEv1) using Android (ICS VPN client) with a strongSwan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues setting up a VPN connection (IKEv1) using the Android (ICS) VPN client and a strongSwan 4.5.0 server. Below is the setup:

    The strongSwan server is running on an Ubuntu Linux machine which is connected to a wifi hotspot. Using the steps in this guide link, I generated the CA, server and client certificates. Once the certificates were generated, clientCert.p12 and caCert.pem were sent to the mobile via mail and installed on the Android device. Below are the IP addresses assigned to the various interfaces:

        Linux server wlan0 interface (where the server is running): 192.168.43.212
        Android device eth0 interface: 192.168.43.62

    The Android device is attached to the same wifi hotspot. On the Android device, I use the "IPsec Xauth RSA" option for the VPN authentication configuration. I am using the following ipsec.conf configuration:

        # basic configuration
        config setup
            plutodebug=all
            # crlcheckinterval=600
            # strictcrlpolicy=yes
            # cachecrls=yes
            nat_traversal=yes
            # charonstart=yes
            plutostart=yes

        # Add connections here.
        # Sample VPN connections
        conn ios1
            keyexchange=ikev1
            authby=xauthrsasig
            xauth=server
            left=%defaultroute
            leftsubnet=0.0.0.0/0
            leftfirewall=yes
            leftcert=serverCert.pem
            right=192.168.43.62
            rightsubnet=10.0.0.0/24
            rightsourceip=10.0.0.2
            rightcert=clientCert.pem
            pfs=no
            auto=add

    With the above configuration, when I enable VPN on the Android device the connection is not successful; it times out in the authentication phase. I ran wireshark on both the Android device and the strongSwan server, and from the tcpdump below are the observations:

    1. Initially, Identity Protection (Main Mode) exchanges happen between device and server, and all are successful.
    2. After all the successful Identity Protection (Main Mode) exchanges, the server sends Transaction (Config Mode) to the device.
    3. In reply, the Android device sends an Informational message instead of a Transaction (Config Mode) message.
    4. The server keeps on sending the Transaction (Config Mode) message, and the device again sends Identity Protection (Main Mode) messages.
    5. Finally a timeout happens and the connection fails.

    I also captured the strongSwan server logs; below are snippets from the server logs which verify the same (described above):

        Apr 27 21:09:40 Linux pluto[12105]: | **parse ISAKMP Message:
        Apr 27 21:09:40 Linux pluto[12105]: | initiator cookie:
        Apr 27 21:09:40 Linux pluto[12105]: | 06 fd 61 b8 86 82 df ed
        Apr 27 21:09:40 Linux pluto[12105]: | responder cookie:
        Apr 27 21:09:40 Linux pluto[12105]: | 73 7a af 76 74 f0 39 8b
        Apr 27 21:09:40 Linux pluto[12105]: | next payload type: ISAKMP_NEXT_HASH
        Apr 27 21:09:40 Linux pluto[12105]: | ISAKMP version: ISAKMP Version 1.0
        Apr 27 21:09:40 Linux pluto[12105]: | exchange type: ISAKMP_XCHG_INFO
        Apr 27 21:09:40 Linux pluto[12105]: | flags: ISAKMP_FLAG_ENCRYPTION
        Apr 27 21:09:40 Linux pluto[12105]: | message ID: a2 80 ad 82
        Apr 27 21:09:40 Linux pluto[12105]: | length: 92
        Apr 27 21:09:40 Linux pluto[12105]: | ICOOKIE: 06 fd 61 b8 86 82 df ed
        Apr 27 21:09:40 Linux pluto[12105]: | RCOOKIE: 73 7a af 76 74 f0 39 8b
        Apr 27 21:09:40 Linux pluto[12105]: | peer: c0 a8 2b 3e
        Apr 27 21:09:40 Linux pluto[12105]: | state hash entry 25
        Apr 27 21:09:40 Linux pluto[12105]: | state object not found
        Apr 27 21:09:40 Linux pluto[12105]: packet from 192.168.43.62:500: Informational Exchange is for an unknown (expired?) SA
        Apr 27 21:09:40 Linux pluto[12105]: | next event EVENT_RETRANSMIT in 10 seconds for #9

    Can anyone please provide an update on this issue? Why does the VPN connection time out, and why are the ISAKMP exchanges not proper between Android and the strongSwan server?

    Read the article

  • Does using GCC-specific builtins qualify as incorporation within a project?

    - by DavidJFelix
    I understand that linking to a program licensed under the GPL requires that you release the source of your program under the GPL as well, while the LGPL does not require this. The terminology of the (L)GPL is very clear about this: #include "gpl_program.h" means you'd have to license under the GPL, because you're linking to GPL-licensed code, while #include "lgpl_program.h" means you're free to license however you want, since the LGPL doesn't prohibit linking from differently-licensed code. Now, my question is about what isn't clear:

    [begin question] GCC is GPL licensed, but compiling with GCC does not constitute "integration" into your program, as the GPL puts it. Does using builtin functions (which are specific to GCC) constitute "incorporation", even though you haven't explicitly linked to this GPL-licensed code? My intuition tells me that this isn't the intention, but legality isn't always intuitive. I'm not actually worried, but I'm curious whether this could be considered the case. [end question]

    [begin aside] The reason for my equivocation is that GCC builtins like __builtin_clzl() or __builtin_expect() are technically an API and could be implemented in another way. For example, many builtins were replicated by LLVM, and the argument could be made that they're not implementation-specific to GCC. However, many builtins have no parallel; when compiled they will link GPL-licensed code in GCC, and they will not compile on other compilers. If you make the argument here that the API could be replicated by another compiler, couldn't you make that identical claim about any program you link to, so long as you don't distribute that source? I understand that I'm being a legal snake about this, but it strikes me as odd that the GPL isn't more specific. I don't see this as a reasonable ploy for proprietary software creators to bypass the GPL, as they'd have to bundle the GPL software to make it work, removing their plausible deniability. However, isn't it possible that, if builtins don't constitute linking, then open source proponents who oppose the GPL could simply write a BSD/MIT/Apache/Apple-licensed product that links to a GPL'd program and claim that they intend to write a non-GPL interface identical to the GPL one, preserving their BSD license until it's actually compiled? [end aside]

    Sorry for the aside; I didn't think many people would follow why I care about this if I'm not facing any legal trouble or implications. Don't worry too much about the hypotheticals there, I'm just extrapolating what either answer to my actual question could imply.

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost of resolving them. The worst case scenario, of course, is for the bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (alternatively, it can be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were to be identified at this stage, the testers would simply not "waste their time" and touch the build at all. Such a "broken build" will trigger an alert, and the development team's number one priority should be to resolve the issue.

    How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following:

    1) Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version.
    2) Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch.
    3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version.

    I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci

    If you have any questions, feel free to contact me directly or post a comment to this blog post.
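    As a deliberately minimal illustration of point 2 above, a CI step could simply try to build a throwaway database from the checked-in schema scripts and fail the build on any error. This is a hand-rolled sketch, not the Red Gate tooling; the connection string and Scripts folder are assumptions, and real scripts containing GO batch separators would need to be split into batches first:

        // Sketch: does the database "compile"? Build it from scratch on each CI run.
        using System.Data.SqlClient;
        using System.IO;

        class BuildDatabaseFromScratch
        {
            static void Main()
            {
                // Assumed connection string and script location.
                const string master = "Server=(local);Database=master;Integrated Security=true";

                using (var conn = new SqlConnection(master))
                {
                    conn.Open();
                    Execute(conn, "CREATE DATABASE CiScratch");
                    Execute(conn, "USE CiScratch");

                    // Apply every checked-in schema script; any SqlException
                    // aborts the process and fails the CI step.
                    foreach (var script in Directory.GetFiles(@".\Scripts", "*.sql"))
                        Execute(conn, File.ReadAllText(script));

                    Execute(conn, "USE master");
                    Execute(conn, "DROP DATABASE CiScratch");
                }
            }

            static void Execute(SqlConnection conn, string sql)
            {
                using (var cmd = new SqlCommand(sql, conn))
                    cmd.ExecuteNonQuery();
            }
        }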

    Read the article

  • Dependency Injection/IoC container practices when writing frameworks

    - by Dave Hillier
    I've used various IoC containers (Castle.Windsor, Autofac, MEF, etc.) for .NET in a number of projects, and I have found they tend to encourage a number of bad practices. Are there any established practices for IoC container use, particularly when providing a platform/framework? My aim as a framework writer is to make code as simple and as easy to use as possible: I'd rather write one line of code to construct an object than ten, or even just two. Here are a couple of code smells that I've noticed and don't have good suggestions for. First, a large number of constructor parameters (5): creating services tends to be complex, and all of the dependencies are injected via the constructor, despite the fact that the components are rarely optional (except perhaps in testing). Second, a lack of private and internal classes; this one may be a specific limitation of using C# and Silverlight, but I'm interested in how it is solved. It's difficult to tell what a framework's interface is if all the classes are public, and it allows me access to private parts that I probably shouldn't touch. Third, coupling the object lifecycle to the IoC container: it is often difficult to manually construct the dependencies required to create objects, and the object lifecycle is too often managed by the IoC framework. I've seen projects where most classes are registered as singletons; you get a lack of explicit control and are also forced to manage the internals (this relates to the previous point: all classes are public and you have to inject them). Finally, the .NET Framework has many static members, such as DateTime.UtcNow, and many times I have seen these wrapped and injected as constructor parameters. Depending on the concrete implementation makes my code hard to test; injecting a dependency makes my code hard to use, particularly if the class has many parameters. How do I provide both a testable interface and one that is easy to use? What are the best practices?
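    To make the DateTime.UtcNow point concrete, here is a minimal sketch of the wrapping the question describes; the IClock and InvoiceService names are invented for the example, not taken from any framework. Time becomes an explicit, fakeable dependency instead of hidden global state, at the price of one more constructor parameter:

      using System;

      // A hypothetical seam around the static DateTime.UtcNow.
      public interface IClock
      {
          DateTime UtcNow { get; }
      }

      public class SystemClock : IClock
      {
          public DateTime UtcNow { get { return DateTime.UtcNow; } }
      }

      public class InvoiceService
      {
          private readonly IClock _clock;

          // In production this receives SystemClock; in tests, a stub
          // that returns a fixed instant.
          public InvoiceService(IClock clock)
          {
              _clock = clock;
          }

          public bool IsOverdue(DateTime dueDateUtc)
          {
              return _clock.UtcNow > dueDateUtc;
          }
      }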

    Read the article

  • The Politics of Junk Filtering

    - by mikef
    If the national postal service, such as the Royal Mail in the UK, were to go through your letters and throw away all the stuff it considered to be junk instead of delivering it to you, you might be rather pleased until you discovered that it took too liberal a view of what was junk. Catalogs you'd asked for? Junk! Requests from charities? Who needs them! Parcels from competing carriers? Toss them away! The possibility of abuse by an agency in a monopolistic position is just too scary to tolerate. After all, the postal service could charge 'consultancy fees' to any sender who wanted to guarantee that his stuff got delivered, or it could even farm this out to other companies. Because Microsoft Outlook is just about the only email client used by the international business community in the west, its spam filter is the final arbiter of what gets read. My Outlook 2007, set to the default settings, junks all the perfectly innocent email newsletters that I subscribe to. Whereas Google Mail, Yahoo, and LIVE are all pretty accurate in detecting spam, Outlook makes all sorts of silly mistakes. The documentation speaks techno-babble about 'advanced heuristics', but the result boils down to an inaccurate mess. The more that Microsoft fiddles with it, the stickier the mess. To make matters worse, it still lets through obvious spam. The filter is occasionally updated, along with the other automatic 'security' updates, if you opt for automatic updates. As an editor for a popular online publication that provides a newsletter service, I find this an obvious source of frustration. We follow all the best practices we know about. We ensure that it is a trivial task to opt out of receiving the newsletter. We format it to the requirements of the service providers. We follow up, and resolve, every complaint. As a result, it gets delivered. It is galling to discover that, after all that effort, Outlook then often judges the contents to be junk on a whim, so you don't get to see it. A few days ago, Microsoft published the PST file format specification, under pressure from a European Union interoperability investigation by ECIS (the European Committee for Interoperable Systems). The objective was that other applications could then access existing PST files, so as to migrate from existing Outlook installations to other solutions. Joaquín Almunia, the current competition commissioner, should now turn his attention to the more subtle problems of Microsoft Outlook. The junk problem seems to have come from clumsy implementation of client-side spam filtering rather than from deliberate exploitation of a monopoly on the desktop email client for businesses, but it is a growing problem nonetheless. Cheers, Michael Francis

    Read the article

  • Fixing the #mvvmlight code snippets in Visual Studio 11

    - by Laurent Bugnion
    If you installed the latest MVVM Light version for Windows 8, you may encounter an issue where code snippets are not displayed correctly in the Intellisense popup. I am working on a fix, but for now here is how you can solve the issue manually. The code snippets: MVVM Light, when installed correctly, installs a set of code snippets that are very useful and allow you to type less code. As I like to say, code is where bugs are, so you want to type as little of it as possible ;) With code snippets, you can easily auto-insert segments of code and replace the keywords where needed. For instance, every coder who uses MVVM as his favorite UI pattern for XAML-based development is used to the INotifyPropertyChanged implementation, and to how boring it can be to type these “observable properties”. Obviously a good fix would be something like an “Observable” attribute, but that is not supported in the language or the framework for the moment. Another fix involves “IL weaving”, a post-build operation that modifies the generated IL and inserts the “RaisePropertyChanged” instruction. I admire the inventiveness of those who developed that, but it feels a bit too much like magic to me. I prefer more “down to earth” solutions, and thus I use the code snippets. Fixing the issue: normally, you should see the code snippets in Intellisense when you position your cursor in a C# file and type mvvm; all MVVM Light snippets start with these four letters. However, in Windows 8 CP there is an issue that prevents them from appearing correctly, so you won’t see them in the Intellisense window. To restore them, follow these steps: In Visual Studio 11, open the menu Tools, Code Snippets Manager. In the combobox, select Visual C#. Press Add… Navigate to C:\Program Files (x86)\Laurent Bugnion (GalaSoft)\Mvvm Light Toolkit\SnippetsWin8 and select the CSharp folder. Press Select Folder. Press OK to close the Code Snippets Manager. Now if you type mvvm in a C# file, you should see the snippets in your Intellisense window. Cheers, Laurent Bugnion (GalaSoft)
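    For reference, this is roughly the kind of “observable property” boilerplate the snippets expand for you; a sketch only, with illustrative class and property names, so the exact snippet output may differ:

      using GalaSoft.MvvmLight;

      public class MainViewModel : ViewModelBase
      {
          public const string WelcomeTitlePropertyName = "WelcomeTitle";
          private string welcomeTitle = string.Empty;

          // The snippet saves you from typing this pattern by hand for
          // every property that must raise PropertyChanged.
          public string WelcomeTitle
          {
              get { return welcomeTitle; }
              set
              {
                  if (welcomeTitle == value) return; // avoid redundant notifications
                  welcomeTitle = value;
                  RaisePropertyChanged(WelcomeTitlePropertyName);
              }
          }
      }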

    Read the article

  • Twitter API for Java - Hello Twitter Servlet (TOTD #178)

    - by arungupta
    There are a few Twitter APIs for Java that allow you to integrate Twitter functionality in a Java application. This is yet another such API, built using the JAX-RS and Jersey stack. I started this effort earlier this year and kept delaying the release because I wanted to provide a more comprehensive API. But I've delayed enough, and I'm releasing it as a work in progress. I'm happy to take contributions in order to evolve this API and make it complete, useful, and robust. Drop a comment on the blog if you are interested, or ping me at @arungupta. How do you get started? Just add the following to your "pom.xml": <dependency> <groupId>org.glassfish.samples</groupId> <artifactId>twitter-api</artifactId> <version>1.0-SNAPSHOT</version> </dependency> The implementation of this API uses the Jersey OAuth filters for authentication with Twitter, so the following dependencies are required for any API call that needs authentication, which is pretty much all of them ;-) <dependency> <groupId>com.sun.jersey.contribs.jersey-oauth</groupId> <artifactId>oauth-client</artifactId> <version>${jersey.version}</version> </dependency> <dependency> <groupId>com.sun.jersey.contribs.jersey-oauth</groupId> <artifactId>oauth-signature</artifactId> <version>${jersey.version}</version> </dependency> Once the dependencies are added to your project, inject the Twitter API into your Servlet (or any other Java EE component) as: @Inject Twitter twitter; Here is a simple non-secure invocation of the API to get you started: SearchResults result = twitter.search("glassfish", SearchResults.class); for (SearchResultsTweet t : result.getResults()) { out.println(t.getText() + "<br/>"); } This code returns the tweets that match the query "glassfish". The source code for the complete project can be downloaded here. Download it, unzip, and "mvn package" will build the .war file. You can then deploy it on GlassFish or any other Java EE 6 compliant application server! The source code for the API also acts as the javadocs and can be checked out from here. A more detailed sample using security and several other APIs from this library is coming soon!

    Read the article

  • Winners among themselves - at the Oracle Partner Day in Frankfurt

    - by A&C Redaktion
    WE ARE WAITING FOR YOU! ORACLE PARTNER DAY - OCTOBER 29, 2012. Success always has an athletic component: ambition, knowledge and stamina are decisive in making it to the next round. As a partner, you can now play in a new league, because from now on you have the opportunity to sell the entire Oracle Red Stack product portfolio: software, hardware and applications. Awaiting you on the day: David Callaghan, Senior Vice President EMEA Alliances & Channels; Jürgen Kunz, Senior Vice President Northern Europe & Country Leader Germany; Silvia Kaske, Senior Director Alliances & Channels Europe North; Christian Werner, Senior Director Alliances & Channels Germany. Do you have questions for the executives? They will be answered live in the Oracle Leaders Panel (from 5 p.m.). Simply send us your questions: by SMS to +49 176 84879149, by email, via Facebook or via Twitter. The new A&C coaching team is in place. Will you be there? Top performance needs a broad base: you will get to know the new team line-up live on site. Get support, and profit from an excellent network: your partnership on the way to the top. Move up into the new league together with Alliances & Channels, whose product portfolio is unmatched in the market. Only a few days left until Oracle Partner Day 2012! It is not too late to register, though. Set off for the top now and sign up right here. There is still the chance for a 1:1 meeting with selected Oracle managers on site; tick the contact you would most like to talk to. You take the test, we pay the test fee! Use the opportunity to get accredited as an OPN Implementation Specialist on the spot: register now for the official implementation exam at the Pearson Vue test center on site at the Oracle Partner Day. Choose your specialty from Fusion Middleware, Applications, Hardware or Database, and go home as an implementation specialist. Now the ball gets rolling! A sporting finale in the Commerzbank Arena: after the technical topics of the day, we want to wind down the evening with you in a sporting setting. The "Oracle Red Stack Arena Sport Challenge" holds a few surprises for you. Only this much can be revealed: there will also be a prize draw with attractive prizes, linked via QR codes, so have your smartphone ready. We look forward to seeing you!

    Read the article

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing. When a single database supports more than one application, the problem gets more interesting. I'll need to be more precise here: it is actually the application-interface definition of the database that needs to be at a compatible ‘version’. Most databases that get into production have no separate application-interface; in other words they are ‘close-coupled’. For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free. If you've spurned the perceived wisdom of application architects to have a defined application-interface within the database, based on views and stored procedures, any version-mismatch will be as sensitive as a kitten. A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping database and application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana. I've been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being ‘banned’ from the base tables or routines of a database. There is no good technical reason for needing that sort of access that I've ever come across. Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: this is the application-interface. If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles. The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We'd be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module. If you have a stable, defined application-interface with the database (yes, usually one for each application), you need to keep the external definitions of the components of this interface in version control, linked with the application source, and carefully track and negotiate any changes between database developers and application developers. Essentially, the application development team owns the interface definition, and the onus is on the database developers to implement it and maintain it, in conformance. Internally, the database can then undergo all sorts of changes and refactoring, as long as source control is maintained.
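    To make this concrete, here is a minimal sketch of what such an application-interface looks like from the application side; the class, schema, procedure and parameter names are invented for the example. The application module calls only the agreed stored procedures, never the base tables, so the database developers remain free to refactor behind the facade:

      using System.Data;
      using System.Data.SqlClient;

      // The single module through which this application talks to the database.
      // Only interface procedures are called; the base tables stay private to
      // the database developers.
      public class CustomerGateway
      {
          private readonly string _connectionString;

          public CustomerGateway(string connectionString)
          {
              _connectionString = connectionString;
          }

          public DataTable GetCustomerOrders(int customerId)
          {
              using (var connection = new SqlConnection(_connectionString))
              using (var command = new SqlCommand("interface.GetCustomerOrders", connection))
              {
                  command.CommandType = CommandType.StoredProcedure;
                  command.Parameters.AddWithValue("@CustomerId", customerId);

                  var table = new DataTable();
                  new SqlDataAdapter(command).Fill(table);
                  return table;
              }
          }
      }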
If the application interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance-testing can ‘hang’ on the same interface, since databases are judged on the performance of the application, not on an ‘internal’ database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis, since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application. Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface, and which hopefully don't often need to be altered. If the application is rapidly changing its ‘domain model’ in the light of an increased understanding of the application domain, then it can change the interface definitions, and the database developers need only implement the new interface rather than refactor the underlying database. The test team will also have to redo the functional and integration tests, which are, of course, ‘written to’ the definition. The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface. If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application interface is never broken. Testing is easy, since your automated scripts to test the interface do not need to change. The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application interfaces should be within the application source. Changes to it have to be subject to change-control procedures, as they will require a chain of tests. Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems. Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn't necessarily the case with the database. An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn't often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques. This process is like demanding that an official walk with a red flag in front of a motor car. In order to closely coordinate databases with applications, entire databases have to be ‘versioned’, so that an application version can be matched with a database version to produce a working build without errors.
There is no natural process to ‘version’ databases; each development project will have to define a system for maintaining the version level. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments look to IT management like work. They see activity, and it looks good: work means progress, and management then smile on the design choices made. In IT, good design work doesn't necessarily look good, and vice versa.

    Read the article

< Previous Page | 255 256 257 258 259 260 261 262 263 264 265 266  | Next Page >