Search Results

Search found 504 results on 21 pages for 'mysterious jean'.


  • TransactionScope question - how can I keep the DTC from getting involved in this?

    - by larryq
    (I know the circumstances surrounding the DTC and transaction promotion can be a bit mysterious to those of us not in the know, but let me show you how my company is doing things. If you can tell me why the DTC is getting involved, and if possible what I can do to avoid it, I'd be grateful.)

    I have code running on an ASP.NET web server. We have one database, SQL Server 2008. Our data access layer uses a wrapper object for SqlConnections and SqlCommands. Typical use looks like this:

        void method1()
        {
            objDataObject = new DataAccessLayer();
            objDataObject.Connection = SomeConnectionMethod();
            SqlCommand objCommand = DataAccessUtils.CreateCommand(SomeStoredProc);
            // create some SqlParameters, add them to objCommand, etc.
            objDataObject.BeginTransaction(IsolationLevel.ReadCommitted);
            objDataObject.ExecuteNonQuery(objCommand);
            objDataObject.CommitTransaction();
            objDataObject.CloseConnection();
        }

    So indeed, a very thin wrapper around SqlClient, SqlConnection and so on. I want to run several stored procs in one transaction, but the wrapper class doesn't expose the SqlTransaction, so I can't pass it from one component to the next. This led me to use a TransactionScope:

        using (TransactionScope tx1 = new TransactionScope(TransactionScopeOption.RequiresNew))
        {
            method1();
            method2();
            method3();
            tx1.Complete();
        }

    When I do this, the DTC gets involved, and unfortunately our web servers don't have "allow remote clients" enabled in the MSDTC settings -- getting IT to allow that will be a fight. I'd love to keep the DTC out of this, but can I? Can I leave out the transactional calls in method1() through method3() and just let the TransactionScope figure it all out?
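
    For what it's worth, here is a minimal sketch (not our wrapper; the proc names and connection string are made up) of the pattern that, as I understand it, should stay local on SQL Server 2008: connections inside one TransactionScope that share a connection string and are opened one at a time, never concurrently, should not be promoted to the DTC.

        using System.Data;
        using System.Data.SqlClient;
        using System.Transactions;

        class LocalTxSketch
        {
            // Hypothetical connection string
            const string ConnStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI";

            static void RunProc(string procName)
            {
                // Each call opens and closes its own connection; it enlists
                // in the ambient transaction automatically.
                using (var conn = new SqlConnection(ConnStr))
                using (var cmd = new SqlCommand(procName, conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    conn.Open();
                    cmd.ExecuteNonQuery();
                } // closed before the next connection opens
            }

            static void Main()
            {
                using (var tx = new TransactionScope())
                {
                    RunProc("Proc1"); // hypothetical proc names
                    RunProc("Proc2");
                    RunProc("Proc3");
                    tx.Complete();
                }
            }
        }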


  • Getting the name of a child class in the parent class (static context)

    - by Benoit Myard
    Hi everybody,

    I'm building an ORM library with reuse and simplicity in mind; everything goes fine except that I got stuck by a stupid inheritance limitation. Please consider the code below:

        class BaseModel
        {
            /*
             * Return an instance of a Model from the database.
             */
            static public function get (/* varargs */)
            {
                // 1. Notice we want an instance of User
                $class = get_class(parent); // value: bool(false)
                $class = get_class(self);   // value: bool(false)
                $class = get_class();       // value: string(9) "BaseModel"
                $class = __CLASS__;         // value: string(9) "BaseModel"

                // 2. Query the database with id
                $row = get_row_from_db_as_array(func_get_args());

                // 3. Return the filled instance
                $obj = new $class();
                $obj->data = $row;
                return $obj;
            }
        }

        class User extends BaseModel
        {
            protected $table = 'users';
            protected $fields = array('id', 'name');
            protected $primary_keys = array('id');
        }

        class Section extends BaseModel
        {
            // [...]
        }

        $my_user = User::get(3);
        $my_user->name = 'Jean';

        $other_user = User::get(24);
        $other_user->name = 'Paul';

        $my_user->save();
        $other_user->save();

        $my_section = Section::get('apropos');
        $my_section->delete();

    Obviously, this is not the behavior I was expecting (although the actual behavior also makes sense). So my question is: do you guys know of a means to get, in the parent class, the name of the child class?
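
    For reference, a sketch of the approach that is commonly suggested for this, assuming PHP 5.3 or later (late static bindings); get_row_from_db_as_array() stands for the same hypothetical helper as above:

        class BaseModel
        {
            static public function get (/* varargs */)
            {
                $class = get_called_class(); // "User" when called as User::get()
                $row = get_row_from_db_as_array(func_get_args());
                $obj = new $class();
                $obj->data = $row;
                return $obj;
            }
        }

    The static keyword (e.g. new static()) gives the same late-bound behavior on PHP 5.3+.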


  • Problems running XNA game on 64-bit Windows 7

    - by Tesserex
    I'm having problems getting my game engine to run on my brother's machine, which is running 64-bit Windows 7. I'm developing on 32-bit XP SP2. My app uses XNA, FMOD.NET, and another DLL I wrote entirely in C#. Everything is targeted to x86, not AnyCPU; I've read that this is required for XNA to work because there is no 64-bit XNA framework. I recompiled FMOD.NET as x86 as well and made sure to use the 32-bit version of the native DLL, so I don't see any problems there.

    However, when he tries to run my app, it gives an error which is mysterious but not unheard of: a FileNotFoundException with an empty file name, with the top of the stack trace in my main form constructor. The message is:

        The specified module could not be found. (Exception from HRESULT: 0x8007007E)

    I found some threads online about this error, all with very vague, mixed, and fuzzy responses that don't really help me. Most remind people to target x86. Some say to check that all the necessary DLLs are present. I gave my brother Microsoft.Xna.Framework.dll, but does he need to install the entire XNA redistributable package? When I take everything I sent him and stick it in a random directory, it still runs fine for me.

    I developed the game in VS2008, not in Game Studio, using XNA 3.0 and a Windows Forms control that uses XNA drawing, which I found in an MSDN tutorial. I would also like to avoid requiring a full installer if possible. Any insight? Please?


  • Prevent two users from editing the same data

    - by Industrial
    Hi everyone,

    I have seen a feature in different web applications, including Wordpress (not sure?), that warns a user if he/she opens an article/post/page/whatever from the database while someone else is editing the same data simultaneously. I would like to implement the same feature in my own application and I have given this a bit of thought. Is the following example a good practice for doing this?

    It goes a little something like this:

    1) User A enters the editing page for the mysterious article X. The database table Events is queried to make sure that no one else is editing the same page at the moment, which no one is by then. A token is then randomly generated and inserted into the Events table.

    2) User B also wants to make updates to article X. Since User A is already editing the article, the Events table is queried and looks like this:

        | timestamp  | owner  | origin    | token      |
        |------------|--------|-----------|------------|
        | 1273226321 | User A | article-x | uniqueid## |

    3) The timestamp is checked. If it's valid and less than, say, 100 seconds old, a message appears and the user cannot make any changes to the requested article X: "Warning: User A is currently working with this article. In the meantime, editing cannot be done. Please do something else with your life."

    4) If User A decides to go on and save his changes, the token is posted along with all other data to update the database, and this also triggers a query to delete the row with token uniqueid##. If he decides to do something else instead of committing his changes, article X will become available again for User B to edit after 100 seconds.

    Let me know what you think about this approach! Wish everyone a great weekend!
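
    A sketch of what that could look like at the database level, as one possible reading of the scheme above (assuming MySQL; the names are taken from the example):

        CREATE TABLE Events (
            timestamp INT UNSIGNED NOT NULL,
            owner     VARCHAR(64)  NOT NULL,
            origin    VARCHAR(128) NOT NULL,
            token     CHAR(32)     NOT NULL,
            PRIMARY KEY (origin)
        );

        -- The pre-edit check: a lock is live only if younger than 100 seconds.
        SELECT owner FROM Events
        WHERE origin = 'article-x'
          AND timestamp > UNIX_TIMESTAMP() - 100;

    Making origin the primary key has the nice side effect that two simultaneous lock attempts on the same article cannot both succeed.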


  • jQuery: how to store record ids for data from an ajax query in dynamically created html elements

    - by grapkulec
    Probably the question title is rather cryptic, but I will try to explain myself here, so please bear with me :)

    Let's assume this configuration:

    - server-side is a PHP application responding to requests with data (list of items, single item details, etc.) in json format
    - client-side is a jQuery application sending ajax requests to that PHP app and creating html content corresponding to the received data

    So, for example: the client requests "list of all animals with names starting with 'A'", gets the json response from the server, and for every "animal" creates some html gizmo like a div with the animal description or something like that. It doesn't really matter what html element it will be, but it has to point exactly to a specific record by "containing" the id of that record. And here is my dilemma: is it a good solution to use the "id" property for that? So it would be like:

        <div id="10" class="animal">
          <p>
            This is animal of very mysterious kind...
          </p>
        </div>
        <div id="11" class="animal">
          <p>
            And this one is very common to our country...
          </p>
        </div>

    where id="10" is of course an indication that this is the representation of the record with id = 10. Or maybe I should store this record id in some custom made tag like

        <record_id>10</record_id>

    and leave "id" strictly for what it was meant to be (css selector)?

    I need that record id for further stuff like updating the database with some user input, deleting some of the "animals", creating new ones, or anything else that will be needed. All manipulations will be done with jQuery, and ajax requests and responses will be visualized with dynamic creation of the html interface. I'm sure that somebody had to deal with this kind of stuff before, so I would be grateful for some tips on that topic.
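
    One pattern worth weighing here, sketched under the assumption that HTML5-style data-* attributes are acceptable in your pages: keep the record id in a data attribute, which stays valid markup (an id starting with a digit is not valid in HTML4) and leaves "id" free for styling hooks.

        <div class="animal" data-record-id="10">
          <p>This is animal of very mysterious kind...</p>
        </div>

    jQuery can then read the value back with $('.animal').first().attr('data-record-id'), and newer jQuery versions also expose it through .data('recordId').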


  • How do I produce a screenshot of a flash swf on the server?

    - by indignant
    I'm writing a flash app using the open source tools. I would like to load a data file into the app and capture a screenshot of the stage on the server. The only part that seems mysterious is running the app on the server. In fact, I don't even care if it's the same app running on the server and in the browser -- if I can use the flash stage and drawing routines to produce an image server-side, I'm happy. If I have to delve into flex, fine. Right now I'm having problems finding any starting point at all. I gather Adobe has some commercial products that may fit the bill, but I'd like to stick with open source, apache, and linux. I know this is probably possible with haxe/neko, but I'd like to use more mainstream tools if possible. Am I asking too much?

    EDIT/CLARIFICATION: Many thanks for the responses so far, but I think I've been a bit muddy in my description. I've already written the actual stage-grabbing stuff using the same PNGEncoder class as was suggested. The problem is in actually running the swf on the server side. I don't want to let the client take the screenshot itself, because this opens up the possibility of the client maliciously submitting a screenshot which does not correspond to what is on the stage -- that is, I don't want users uploading porn. If I could run the actionscript code on the server, then I could generate the screenshot from my data files and be sure that the screenshot matches the data, but I have no idea how to run the actionscript or swf on the server.


  • Debugging instance of another thread altering my data

    - by Mick
    I have a huge global array of structures. Some regions of the array are tied to individual threads, and those threads can modify their regions of the array without having to use critical sections. But there is one special region of the array which all threads may access. The code that accesses these parts of the array needs to carefully use critical sections (each array element has its own critical section) to prevent any possibility of two threads writing to the structure simultaneously.

    Now I have a mysterious bug I am trying to chase. It is occurring unpredictably and very infrequently: it seems that one of the structures is being filled with an incorrect number. One obvious explanation is that another thread has accidentally been allowed to set this number when it should be excluded from doing so. Unfortunately it seems close to impossible to track this bug down; the array element in which the bad data appears is different each time.

    What I would love to be able to do is set some kind of trap for the bug as follows: I would enter a critical section for array element N, so I know that no other thread should be able to touch the data. Then (until I exit the critical section) I would set some kind of flag in a debugging tool saying "if any other thread attempts to change the data here, please break and show me the offending patch of source code"... but I suspect no such tool exists... or does it? Or is there some completely different debugging methodology that I should be employing?
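
    For what it's worth, a crude software version of that trap could look something like this (a sketch, assuming Windows; Element, cs and value are stand-in names, and it only catches the corruption in the act rather than naming the offending code):

        #include <windows.h>

        struct Element { CRITICAL_SECTION cs; volatile int value; };

        // While we legitimately hold element N's lock, poll for a change we
        // did not make; DebugBreak() stops in the debugger at the moment of
        // corruption (though in this thread, not the offending one).
        void trap_foreign_writes(Element* e, int polls)
        {
            EnterCriticalSection(&e->cs);
            int snapshot = e->value;
            for (int i = 0; i < polls; ++i) {
                if (e->value != snapshot)
                    DebugBreak();   // another thread wrote without the lock
                Sleep(1);
            }
            LeaveCriticalSection(&e->cs);
        }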


  • Error 1009 after gotoAndStop - stage instance never gets added

    - by ELambda
    I've been through the forums for hours (days?) searching on 1009 errors, but I remain stumped on this. It seems very mysterious and I would LOVE some help if you have any ideas.

    I have a single .swf that is 7 frames long - each frame represents a different "page" and you can switch pages through a menu widget in the top right corner. The menu widget calls gotoAndPlay("frame"). Everything works fine except when I switch from one particular frame to another. Then, during initialization of the new frame (setting some visible properties on various items, in actionscript), I get the dreaded 1009 error on a specific stage instance, a dynamic text instance i_word.

    Here's what I've tried so far:

    - made sure the actionscript for the new frame starts with a stop() statement before starting initialization - no dice
    - tried changing i_word into a movie clip instead of dynamic text, and made sure it was exported for actionscript - no difference. (I also have 2 other dynamic text instances on the same page that don't seem to cause a problem)
    - added an ENTER_FRAME listener when the new frame is loaded, in case the problem was a timing issue, and put in a big if statement checking that i_word and the other instances are not null before proceeding to initialization. It never enters the if, because i_word NEVER gets added. I added trace statements for all instances that are null, and it is the only one. If I remove all references to i_word in my actionscript, everything else is not null and things go forward. The text for i_word even shows up on the screen in that case.
    - tried renaming i_word - no dice
    - tried deleting the layer i_word was on and adding a new layer - no dice

    It feels like there is a serious Gremlin in my flash file somewhere. Or maybe I'm missing something obvious. Let me know if you have any ideas... I'd be so grateful. Thank you!

    Elambda


  • Saving and restoring OpenGL model-view

    - by Tom
    I am a newcomer to OpenGL, and much of it remains mysterious to my feeble brain. I have been studying the NeHe demos as well as the Red Book.

    I am writing an Android application that displays the Earth in the center of the screen. The user can rotate the Earth about any axis (much like a very simple "Google Earth"). This code is working (I based it on the NeHe examples). Now I want to add a feature: the user should be able to save the current model orientation, then later return to that same orientation. For example, the user may save the Earth orientation such that the viewer is looking down at her hometown, and north-east is "up".

    How do I do this with OpenGL ES? To capture and save the current orientation, my code could get the current model-view transformation matrix - I think I understand how to do that. But later on, how do I apply that saved matrix to restore the view?
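
    A minimal sketch of the round trip, assuming OpenGL ES 1.1 (the GL11 interface; ES 1.0 does not support reading the matrix back with glGetFloatv):

        import javax.microedition.khronos.opengles.GL11;

        // Capture the current model-view matrix, then restore it later.
        class OrientationStore {
            private final float[] saved = new float[16];

            void save(GL11 gl) {
                gl.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, saved, 0);
            }

            void restore(GL11 gl) {
                gl.glMatrixMode(GL11.GL_MODELVIEW);
                gl.glLoadMatrixf(saved, 0); // reload before drawing
            }
        }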


  • Destroy? Delete? What's going on here? Rails 2.3.5

    - by Steve
    I am new to rails. My rails version is 2.3.5. I found usage like this: in the controller, a destroy method is defined, and in the view, you can use :action => "delete" to fire that method. Doesn't the action name have to be the same as the method name? Why is delete mapped to destroy?

    Again, in my controller I define a method called destroy to delete a record. In a view, I have

        <%= link_to "remove", :action => 'destroy', :id => myrecord %>

    But it never works in practice. Every time I press the remove link, it redirects me to the show view, showing the record's content. I am pretty sure that my destroy method is:

        def destroy
          @myobject = MyObject.find(params[:id])
          @myobject.destroy
          redirect_to :action => 'index'
        end

    If I change the method name from destroy to something like remove_me, and change the action name to remove_me in the view, everything works as expected.

    In the above two weird problems, I am sure there is no tricky routing set in my configuration. All in all, destroy and delete seem to be mysterious keywords in rails. Can anyone explain this to me? Thank you very much.
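
    For context, a sketch of the convention at work here, assuming a standard RESTful route (map.resources :myobjects in config/routes.rb): Rails maps the HTTP DELETE verb to the destroy action, while a plain link issues a GET, which routes to show -- exactly the symptom described above. The conventional link looks like:

        <%# config/routes.rb: map.resources :myobjects %>
        <%= link_to "remove", myobject_path(myrecord), :method => :delete %>

    Without :method => :delete, the generated request is a GET for /myobjects/:id, which Rails routes to show.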


  • Troubleshooting an unstable internet connection

    - by Konrad Rudolph
    My MacBook Pro running OS X (10.9, but I had the same problem before) is connected to a Belkin router via WiFi and, using Virgin Media as the ISP, to the internet. The connection is extremely unstable - on some days, I get a ping timeout every few seconds. In addition, some domains seem to suffer general connectivity issues. For instance, I often find that while the youtube.com website loads, none of the videos (which are hosted on a separate domain) do. At other times, videos load but always fail to buffer, even though the actual connection speed is ok and I've disabled dash playback.

    Since I'm living in a rented room and the ISP contract isn't actually mine, I've got only limited possibilities of addressing the problem. In particular, I have no access to the router configuration, and my non tech savvy landlady, while sympathetic, is not in a great hurry to hand the problem over to the ISP's customer support. What's more, I seem to be the only person in the house experiencing these problems - but I can imagine that this is simply because I'm the only one who's using the internet continuously.

    I'm searching for specific tests that might be able to pinpoint - and ideally solve - the problem. So far all I've managed to do is establish that Virgin is routing my traffic in mysterious ways. Here's an excerpt from traceroute google.co.uk; it's worth mentioning that the host name doesn't seem to matter a lot, the trace route is always the same.

        traceroute: Warning: google.co.uk has multiple addresses; using 62.254.36.148
        traceroute to google.co.uk (62.254.36.148), 64 hops max, 52 byte packets
         1  (192.168.2.1)  1.112 ms  1.300 ms  2.359 ms
         2  10.100.32.1 (10.100.32.1)  11.926 ms  10.217 ms  24.987 ms
         3  cmbg-core-1a-ae3-610.network.virginmedia.net (80.1.202.93)  28.809 ms  *  66.653 ms
         4  popl-bb-1b-ae16-0.network.virginmedia.net (212.43.163.141)  13.759 ms  126.504 ms  20.472 ms
         5  nrth-bb-1b-et-010-0.network.virginmedia.net (62.253.175.57)  28.357 ms  16.398 ms  42.387 ms
         6  nrth-bb-1c-ae1-0.network.virginmedia.net (62.253.174.110)  27.441 ms  15.622 ms  12.044 ms
         7  lutn-icdn-1-ae0-0.network.virginmedia.net (62.253.175.82)  16.678 ms  28.463 ms  28.253 ms
         8  * * *
         9  * * *
        10  * * *
        ^C

    If I let it, this goes on until the end of time; it never seems to reach a destination. Is this normal? A friend living in the same town who is also with Virgin Media has a more conventional traceroute output: 7 hops to google.co.uk, all of which send the ICMP TIME_EXCEEDED response.

    The obvious fix - rebooting the router - doesn't seem to help. As far as I can tell, the WiFi connection is stable (I can always ping the router), so the problem is further downstream. I've tried using an alternative DNS before (OpenDNS) but if anything, this made things worse. In fact, it made all Google services nigh unreachable.
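
    Two checks that could help localize a fault like this, offered as suggestions (mtr does not ship with OS X but is available through Homebrew or MacPorts; 10.100.32.1 is the first hop past the router in the trace above):

        mtr --report --report-cycles 60 google.co.uk   # per-hop loss/latency over a minute
        ping -c 100 10.100.32.1                        # sustained loss to the ISP-side first hop

    Consistent loss that first appears at one hop and persists to every hop after it usually points at that segment rather than at the WiFi link.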


  • Installing MySQL-Ruby on Linux and Ruby 1.9.2

    - by Klam
    I am having an absurdly difficult time getting MySQL-Ruby to install on RedHat 4 using Ruby 1.9.2. I am behind a company proxy that prevents pretty much any package tool from connecting to external repositories, so "gem install mysql" isn't going to cut it.

    I have tried installing the mysql-ruby gem locally, but it fails with a mysterious:

        $ gem install mysql-2.8.1.gem
        Building native extensions.  This could take a while...
        ERROR:  Error installing mysql-2.8.1.gem:
                ERROR: Failed to build gem native extension.

        /ns/local/apps/internal/SWS/MetricsPublisher/ruby/bin/ruby extconf.rb
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lm... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lz... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lsocket... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lnsl... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lmygcc... no
        checking for mysql_query() in -lmysqlclient... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

    I have also tried building the module myself by following the included readme. The results:

        $ ruby extconf.rb --with-mysql-include=/path_to_my_sql_headers/mysql/include/ --with-mysql-lib=/path_to_my_sql_lib/mysql/lib/
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lm... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lz... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lsocket... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lnsl... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lmygcc... no
        checking for mysql_query() in -lmysqlclient... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

    Does anybody have any ideas? Quite frankly, I don't even care if MySQL-Ruby specifically works; I just want ANY means of connecting to a MySQL DB through a ruby call in ruby 1.9. Thanks.
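
    One avenue that may be worth a try, assuming the MySQL client development package is installed and provides mysql_config (the extconf.rb in the mysql gem understands this option; the path shown is an example):

        $ ruby extconf.rb --with-mysql-config=/usr/bin/mysql_config

    mysql_config reports the exact include and library flags the client library was built with, which sidesteps guessing the --with-mysql-include/--with-mysql-lib paths; a persistent "mysql_query() ... no" usually means the linker still cannot find a usable libmysqlclient for this architecture.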


  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    Marko Parkkola

    This article has been submitted by Marko Parkkola, data systems designer at Saarionen Oy, Finland. Marko is an excellent developer and is always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER - IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for enumerations in relational databases. He has written an excellent piece here, and comments are welcome.

    Enumerations in Relational Database

    This is a subject which is a very basic thing in relational databases but is often not very well understood and sometimes badly implemented. There are of course many ways to do this, but I concentrate on only two cases: one which is "the right way" and one which is definitely the wrong way.

    The concept

    Let's say we have a table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there's a field that tells the person's marital status, and let's name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn't have enumerations.

    The wrong way

    This is, in my opinion, absolutely the wrong way to do this. It has one upside though: you'll see the enumeration's description instantly when you do a simple SELECT query, and you don't have to deal with mysterious values. There are plenty of downsides too, and one would be database fragmentation. Consider this (I've left all indexes and constraints out of the query on purpose):

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] NVARCHAR(10)
        )

    You have an nvarchar(10) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn't fit into 10 characters? You'll have to come back and alter the table. There are other problems also, but I'll leave those for the reader to think about.

    The correct way

    Here's how I've done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like.

    First I will create a namespace table which tells the name of the enumeration, and add one row to it. I'll write all the indexes and constraints here too.

        CREATE TABLE [CodeNamespace]
        (
            [Id] INT IDENTITY(1, 1),
            [Name] NVARCHAR(100) NOT NULL,
            CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]),
            CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name])
        )
        GO
        INSERT INTO [CodeNamespace] SELECT 'MaritalStatus'
        GO

    Then I create a table that holds the actual values and which references the namespace table in order to group the values under different namespaces. I'll add a couple of rows here too.

        CREATE TABLE [CodeValue]
        (
            [CodeNamespaceId] INT NOT NULL,
            [Value] INT NOT NULL,
            [Description] NVARCHAR(100) NOT NULL,
            [OrderBy] INT,
            CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]),
            CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id])
        )
        GO
        -- 1 is the 'MaritalStatus' namespace
        INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1
        INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2
        INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3
        INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4
        GO

    Now there are four columns in the CodeValue table. CodeNamespaceId tells which namespace the values belong to. Value tells the enumeration value which is used in the Person table (I'll show how this is done below). Description tells what the value means; you can use this, for example, in a UI combo box. OrderBy tells if the values need to be ordered in some way when displayed in the UI.

    And here's the Person table again, now with the correct columns. I'll add one row here to show how enumerations are to be used.

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] INT
        )
        GO
        INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3
        GO

    Now I said earlier that there is one problem with this. The MaritalStatus column doesn't have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I've solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox / dropdownlist (with the Value field as value and Description as text) so the end user can't enter any illegal values; and of course I check the entered value in the data access layer as well.

    I said in the "The wrong way" section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema-bind so you can even index it if you like.

        CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING
        AS
            SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus
            FROM [dbo].[Person] p
            JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value]
            JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus'
        GO

        -- Select from the view
        SELECT * FROM [dbo].[Person_v]
        GO

    This is an excellent write-up by Marko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
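
    One possible shape for the database-enforced variant alluded to above ("with CHECK constraints if you like") is sketched here; this is an illustration, not part of the original article, and the constraint names are invented:

        -- Carry the namespace id in Person as well, pin it with a CHECK,
        -- and let a composite foreign key reference CodeValue's primary key.
        ALTER TABLE [dbo].[Person] ADD [MaritalStatusNamespaceId] INT NOT NULL
            CONSTRAINT [DF_Person_MaritalStatusNamespaceId] DEFAULT 1
        GO
        ALTER TABLE [dbo].[Person] ADD CONSTRAINT [CK_Person_MaritalStatusNamespace]
            CHECK ([MaritalStatusNamespaceId] = 1)
        GO
        ALTER TABLE [dbo].[Person] ADD CONSTRAINT [FK_Person_CodeValue]
            FOREIGN KEY ([MaritalStatusNamespaceId], [MaritalStatus])
            REFERENCES [CodeValue] ([CodeNamespaceId], [Value])
        GO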


  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications as well as of a complete Linux operating system from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine granular control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classicial" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based on). An application running inside a container will be executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O-resources. Linux containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided it supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux containers within a virtualized Linux instance that runs inside Oracle VM Server for Oracle VM Virtualbox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents that one user's application could hog the entire system and ensures, that each user only has access to his own data set. It also helps to save main memory — if multiple instances of a same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use, they are generally held in memory once and mapped to multiple processes. 
Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and can be duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This allows to provide repeatable software builds and test environments, because the system will always be reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications. Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system. Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview - their use is only recommended for testing and evaluation purposes. The Open-Source project "Linux Containers" (LXC) is driving the development of the technology behind this, which is based on the "Control Groups" (CGroups) and "Name Spaces" functionality of the Linux kernel. Oracle is actively involved in the Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory, CPU cycles as well as the disk and network throughput (in MB/s or IOP/s) that are available for an application. Name Spaces help to isolate process groups from each other, e.g. the visibility of other running processes or the exclusive access to a network device. It's also possible to restrict a process group's access and visibility of the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Name Spaces provide the foundation on which Linux containers are based on, but they can actually be used independently as well. A more detailed description of how Linux Containers can be created and managed on Oracle Linux will be explained in the second part of this article. Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy Linux Containers on Wikipedia - Lenz Grimmer Follow me on: Personal Blog | Facebook | Twitter | Linux Blog |
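
    As a small preview of the second part, the user-space tools follow this pattern (a sketch, assuming the lxc package is installed on Oracle Linux; the container name and template are examples):

        lxc-create -n ol6cont1 -t oracle   # create a container from the Oracle template
        lxc-start  -n ol6cont1             # boot it
        lxc-ls                             # list the containers known to the host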


  • Using the new CSS Analyzer in JavaFX Scene Builder

    - by Jerome Cambon
    As you know, JavaFX provides from the API many properties that you can set to customize your components or make them behave as you want. For instance, for a Button, you can set its font or its max size. Using Scene Builder, these properties can be explored and modified using the Inspector.

    However, JavaFX also provides many other properties for fine-grained customization of your components: the CSS properties. These properties are typically set from a CSS stylesheet. For instance, you can set a background image on a Button, change the Button corners, etc. Using Scene Builder, until now, you could set a CSS property using the Inspector's Style and Stylesheet editors, but you had to go to the JavaFX CSS documentation to know which CSS properties can be applied to a given component.

    Happily, Scene Builder 1.1 recently added a very interesting new feature: the CSS Analyzer. It allows you to explore all the CSS properties available for a JavaFX component, and helps you to build your CSS rules.

    A very simple example: make a Button rounded

    Let's take a very simple example: you would like to customize your Buttons to make them rounded.

    First, enable the CSS Analyzer using the 'View->Show CSS Analyzer' menu. Grow the main window and the CSS Analyzer to get more room. Then, drop a Button from the Library onto the Content View: the CSS Analyzer now shows the Button's CSS properties.

    As you can see, there is a '-fx-background-radius' CSS property that allows you to define the radius of the background (note that you can get the associated CSS documentation by clicking on the property name). You can experiment with this by setting the Button's Style property from the Inspector. As you can see in the CSS doc, one can set the same radius for all 4 corners with a simple number. Once the style value is applied, the Button is rounded, as expected.

    Look at the CSS Analyzer: the '-fx-background-radius' property now has 2 entries: the default one, and the one we just entered from the Style property. The new value "wins": it overrides the default one and becomes the actual value (to highlight this, the cell background becomes blue).

    Now, you will certainly prefer to apply this new style to all the Buttons of your FXML document, and have a CSS rule for this. To do this, save your document first, and create a CSS file in the same directory as the new document. Create an empty CSS file (e.g. test.css) and attach it to the root AnchorPane, by first selecting the AnchorPane and then using the Stylesheets editor from the Inspector.

    Add the corresponding CSS rule to your new test.css file from your preferred editor (Netbeans for me ;-) and save it:

        .button {
            -fx-background-radius: 10px;
        }

    Now, select your Button and have a look at the CSS Analyzer. As you can see, the Button inherits the CSS rule (since the Button is a child of the AnchorPane) and still has its inline Style. The inline Style "wins", since it has precedence over the stylesheet. The CSS Analyzer columns are displayed in precedence order. Note the small right-arrow icons, which allow you to jump to the source of the value (either test.css, or the Inspector in this case). Of course, unless you want to set a specific background radius for this particular Button, you can remove the inline Style from the Inspector.

    Changing the color of a TitledPane arrow

    In some cases, it can be useful to select the inner element you want to style directly from the Content View. Drop a TitledPane onto the Content View. Then select the CSS cursor from the CSS Analyzer (the other cursor on the left allows you to come back to 'standard' selection), which lets you pick an inner element, and select the TitledPane arrow: it gets a yellow background, and the Styleable Path is updated.

    To define a new CSS rule, you can first copy the Styleable Path, then paste it into your test.css file. Then add an entry to set -fx-background-color to red. You should have something like:

        .titled-pane:expanded .title .arrow-button .arrow {
            -fx-background-color: red;
        }

    As soon as test.css is saved, the change is taken into account in Scene Builder. You can also use the Styleable Path to discover all the inner elements of TitledPane, by clicking on the arrow icon.

    More details

    You can see the CSS Analyzer in action (and many other features) in the JavaOne BOF: BOF4279 - In-Depth Layout and Styling with the JavaFX Scene Builder, presented by my colleague Jean-Francois Denise. On the right hand side, click on the Media link to go to the video (streaming) of the presentation. The Scene Builder support of CSS starts at 9:20; the CSS Analyzer presentation starts at 12:50.


  • Allowing Access to HttpContext in WCF REST Services

    - by Rick Strahl
    If you're building WCF REST Services you may find that WCF's OperationContext, which provides some amount of access to Http headers on inbound and outbound messages, is pretty limited in that it doesn't provide access to everything, and sometimes not in a convenient manner. For example, accessing query string parameters explicitly is pretty painful:

        [OperationContract]
        [WebGet]
        public string HelloWorld()
        {
            var properties = OperationContext.Current.IncomingMessageProperties;
            var property = properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
            string queryString = property.QueryString;
            var name = StringUtils.GetUrlEncodedKey(queryString, "Name");
            return "Hello World " + name;
        }

    And that doesn't account for the logic in GetUrlEncodedKey to retrieve the querystring value. It's a heck of a lot easier to just do this:

        [OperationContract]
        [WebGet]
        public string HelloWorld()
        {
            var name = HttpContext.Current.Request.QueryString["Name"] ?? string.Empty;
            return "Hello World " + name;
        }

    Ok, so if you follow the REST guidelines for WCF REST you shouldn't have to rely on reading query string parameters manually but instead rely on routing logic, but you know what: WCF REST is a PITA anyway and anything to make things a little easier is welcome.

    To enable the second scenario there are a couple of steps that you have to take on your service implementation and the configuration file.

    Add aspNetCompatibilityEnabled in web.config

    First you need to configure the hosting environment to support ASP.NET when running WCF Service requests. This ensures that the ASP.NET pipeline is fired up and configured for every incoming request.

        <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    Mark up your Service Implementation with the AspNetCompatibilityRequirements Attribute

    Next you have to mark up the Service Implementation - not the contract, if you're using a separate interface!!! - with the AspNetCompatibilityRequirements attribute:

        [ServiceContract(Namespace = "RateTestService")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class RestRateTestProxyService

    Typically you'll want to use Allowed as the preferred option. The other options are NotAllowed and Required. Allowed will let the service run if the web.config attribute is not set; Required has to have it set. All these settings determine whether an ASP.NET host AppDomain is used for requests.

    Once Allowed or Required has been set on the implemented class you can make use of the ASP.NET HttpContext object. When I allow for ASP.NET compatibility in my WCF services I typically add a property that exposes the Context and Request objects a little more conveniently:

        public HttpContext Context
        {
            get { return HttpContext.Current; }
        }

        public HttpRequest Request
        {
            get { return HttpContext.Current.Request; }
        }

    While you can also access the Response object, write raw data to it and manipulate headers, THAT is probably not such a good idea, as both your code and WCF will end up writing into the output stream. However it might be useful in some situations where you need to take over output generation completely and return something completely custom. Remember though that WCF REST DOES actually support that as well with Stream responses, which essentially allow you to return any kind of data to the client, so using Response should really never be necessary.

    Should you or shouldn't you?

    WCF purists will tell you never to muck with the platform-specific features or the underlying protocol, and if you can avoid it you definitely should. Querystring management in particular can be handled largely with Url Routing, but there are exceptions of course. Try to use what WCF natively provides - if possible - as it makes the code more portable. For example, if you do enable ASP.NET Compatibility you won't be able to self-host a WCF REST service.

    At the same time, realize that especially in WCF REST there are a number of big holes, and access to some features is a royal pain, so it's not unreasonable to access the HttpContext directly, especially if it's only for read-only access. Since everything in REST works off URLs and the HTTP protocol, more control and easier access to HTTP features is a key requirement for building flexible services.

    It looks like vNext of the WCF REST stuff will feature many improvements along these lines, with much deeper native HTTP support that is often so useful in REST applications, along with much more extensibility that allows for customization of the inputs and outputs as data goes through the request pipeline. I'm looking forward to this stuff, as WCF REST as it exists today still is a royal pain (in fact I'm struggling with a mysterious version conflict/crashing error on my machine that I have not been able to resolve - grrrr...).

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET, AJAX, WCF


  • The Case of the Extra Page: Rendering Reporting Services as PDF

    - by smisner
    I had to troubleshoot a problem with a mysterious extra page appearing in a PDF this week. My first thought was that it was likely caused by one of the most common problems people encounter when developing reports that eventually get rendered as PDF: blank pages inserted into the PDF document. The cause of the blank pages is usually related to sizing. You can learn more in Understanding Pagination in Reporting Services in Books Online.

    When designing a report, you have to be really careful with the layout of items in the body. As you move items around, the body will expand to accommodate the space you're using, and you might eventually tighten everything back up again, but the body doesn't automatically collapse. One of my favorite things to do in Reporting Services 2005 - which I dubbed the "vacu-pack" method - was to just erase the size property of the Body and let it auto-calculate the new size, squeezing out all the extra space. Alas, that method no longer works beginning with Reporting Services 2008.

    Even when you make sure the body size is as small as possible (with no unnecessary extra space along the top, bottom, left, or right side of the body), it's important to calculate body size plus header plus footer plus margins and ensure that the calculated height and width do not exceed the report's height and width (shown as the page in the illustration above). This won't matter if users always render reports online, but they'll get extra pages in a PDF document if the report's height and width are smaller than the calculated space.

    Beginning the Investigation

    In the situation that I was troubleshooting, I checked the properties:

        Item         Property             Value
        -----------  -------------------  ------
        Body         Height               6.25in
                     Width                10.5in
        Page Header  Height               1in
        Page Footer  Height               0.25in
        Report       Left Margin          0.1in
                     Right Margin         0.1in
                     Top Margin           0.05in
                     Bottom Margin        0.05in
                     Page Size - Height   8.5in
                     Page Size - Width    11in

    So I calculated the total width using Body Width + Left Margin + Right Margin and came up with a value of 10.7 inches. Then I calculated the total height using Body Height + Page Header Height + Page Footer Height + Top Margin + Bottom Margin and got 7.6 inches. Well, page sizing couldn't be the reason for the extra page in my report, because 10.7 inches is smaller than the report's width of 11 inches and 7.6 inches is smaller than the report's height of 8.5 inches. I had to look elsewhere to find the culprit.

    Conducting the Third Degree

    My next thought was to focus on the rendering size of the items in the report. I've adapted my problem to use the Adventure Works database. At the top of the report are two charts, and below each chart is a rectangle that contains a table. In the real-life scenario, there were some graphics present as a background for the tables, which fit within the rectangles that were about 3 inches high, so the visual space of the rectangles matched the visual space of the charts - also about 3 inches high. But there was also a huge amount of white space at the bottom of the page, and, as I mentioned at the beginning of this post, a second page which was blank except for the footer that appeared at the bottom. Placing a textbox beneath the rectangles to see if it would appear on the first page resulted in the textbox's appearance on the second page. For some reason, the rectangles wanted a buffer zone beneath them. What's going on?

    Taking the Suspect into Custody

    My next step was to see what was really going on with the rectangle. The graphic appeared to be correctly sized, but the behavior in the report indicated the rectangle was growing, so I added a border to the rectangle to see what it was doing. When I added borders, I could see that the size of each rectangle was growing to accommodate the table it contains. The rectangle on the right is slightly larger than the one on the left because the table on the right contains an extra row. The rectangle is trying to preserve the whitespace that appears in the layout, as shown below.

    Closing the Case

    Now that I knew what the problem was, what could I do about it? Because of the graphic in the rectangle (not shown), I couldn't eliminate the use of the rectangles and just show the tables. But fortunately, there is a report property that comes to the rescue: ConsumeContainerWhitespace (accessible only in the Properties window). I set the value of this property to True. Problem solved. Now the rectangles remain fixed at the configured size and don't grow vertically to preserve the whitespace. Case closed.


  • The Unspoken - The Why of GC Ergonomics

    - by jonthecollector
    Do you use GC ergonomics, -XX:+UseAdaptiveSizePolicy, with the UseParallelGC collector? The gist of GC ergonomics for that collector is that it tries to grow or shrink the heap to meet a specified goal. The goals that you can choose are maximum pause time and/or throughput. Don't get too excited there. I'm speaking about UseParallelGC (the throughput collector), so there are definite limits to what pause goals can be achieved. When you say out loud "I don't care about pause times, give me the best throughput I can get" and then say to yourself "Well, maybe 10 seconds really is too long", then think about a pause time goal. By default there is no pause time goal and the throughput goal is high (98% of the time doing application work and 2% of the time doing GC work). You can get more details on this in my very first blog: GC ergonomics.

    UseG1GC has its own version of GC ergonomics, but I'll be talking only about the UseParallelGC version. If you use this option and want to know what it (GC ergonomics) is thinking, try

        -XX:AdaptiveSizePolicyOutputInterval=1

    This will print out information every i-th GC (above, i is 1) about what GC ergonomics is trying to do. For example:

        UseAdaptiveSizePolicy actions to meet  *** throughput goal ***
                       GC overhead (%)
            Young generation:       16.10         (attempted to grow)
            Tenured generation:      4.67         (attempted to grow)
            Tenuring threshold:    (attempted to decrease to balance GC costs) = 1

    GC ergonomics tries to meet (in order):

    - Pause time goal
    - Throughput goal
    - Minimum footprint

    The first line says that it's trying to meet the throughput goal:

        UseAdaptiveSizePolicy actions to meet  *** throughput goal ***

    This run has the default pause time goal (i.e., no pause time goal), so it is trying to reach 98% throughput. The lines

        Young generation:       16.10         (attempted to grow)
        Tenured generation:      4.67         (attempted to grow)

    say that we're currently spending about 16% of the time doing young GC's and about 5% of the time doing full GC's. These percentages are a decaying, weighted average (earlier contributions to the average are given less weight). The source code is available as part of the OpenJDK, so you can take a look at it if you want the exact definition. GC ergonomics is trying to increase the throughput by growing the heap (so says the "attempted to grow").

    The last line

        Tenuring threshold:    (attempted to decrease to balance GC costs) = 1

    says that the ergonomics is trying to balance the GC times between young GC's and full GC's by decreasing the tenuring threshold. During a young collection the younger objects are copied to the survivor spaces while the older objects are copied to the tenured generation. Younger and older are defined by the tenuring threshold. If the tenuring threshold is 4, an object that has survived fewer than 4 young collections (and has remained in the young generation by being copied to the part of the young generation called a survivor space) is younger and is copied again to a survivor space. If it has survived 4 or more young collections, it is older and gets copied to the tenured generation. A lower tenuring threshold moves objects more eagerly to the tenured generation; conversely, a higher tenuring threshold keeps copying objects between survivor spaces longer. The tenuring threshold varies dynamically with the UseParallelGC collector, which is different from our other collectors, which have a static tenuring threshold. GC ergonomics tries to balance the amount of work done by the young GC's and the full GC's by varying the tenuring threshold. Want more work done in the young GC's? Keep objects longer in the survivor spaces by increasing the tenuring threshold.

    This is an example of the output when GC ergonomics is trying to achieve a pause time goal:

        UseAdaptiveSizePolicy actions to meet  *** pause time goal ***
                       GC overhead (%)
            Young generation:       20.74         (no change)
            Tenured generation:     31.70         (attempted to shrink)

    The pause goal was set at 50 millisecs, and the last GC was

        0.415: [Full GC (Ergonomics) [PSYoungGen: 2048K->0K(26624K)] [ParOldGen: 26095K->9711K(28992K)] 28143K->9711K(55616K), [Metaspace: 1719K->1719K(2473K/6528K)], 0.0758940 secs] [Times: user=0.28 sys=0.00, real=0.08 secs]

    The full collection took about 76 millisecs, so GC ergonomics wants to shrink the tenured generation to reduce that pause time. The previous young GC was

        0.346: [GC (Allocation Failure) [PSYoungGen: 26624K->2048K(26624K)] 40547K->22223K(56768K), 0.0136501 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]

    so the pause time there was about 14 millisecs and no changes are needed.

    If trying to meet a pause time goal, the generations are typically shrunk. With a pause time goal in play, watch the GC overhead numbers and you will usually see the cost of setting a pause time goal (i.e., throughput goes down). If the pause goal is too low, you won't achieve your pause time goal and you will spend all your time doing GC.

    GC ergonomics is meant to be simple because it is meant to be used by anyone. It was not meant to be mysterious, and so this output was added. If you don't like what GC ergonomics is doing, you can turn it off with -XX:-UseAdaptiveSizePolicy, but be pre-warned that you then have to manage the size of the generations explicitly. If UseAdaptiveSizePolicy is turned off, the heap does not grow: the size of the heap (and the generations) at the start of execution is always the size of the heap. I don't like that and tried to fix it once (with some help from an OpenJDK contributor), but it unfortunately never made it out the door. I still have hope, though.

    Just a side note: with the default throughput goal of 98%, the heap often grows to its maximum value and stays there. Definitely reduce the throughput goal if footprint is important. Start with -XX:GCTimeRatio=4 for a more modest throughput goal (20% of the time spent in GC). A higher value means a smaller amount of time in GC (as the throughput goal).
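
    Pulling the flags from this post together, an illustrative command line (the class name is made up) would be:

        java -XX:+UseParallelGC -XX:+UseAdaptiveSizePolicy \
             -XX:AdaptiveSizePolicyOutputInterval=1 \
             -XX:MaxGCPauseMillis=50 -XX:GCTimeRatio=4 \
             MyApp

    -XX:MaxGCPauseMillis=50 states the 50 millisec pause time goal used in the example above, and -XX:GCTimeRatio=4 the more modest throughput goal from the side note.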


  • MSSQL: Copying data from one database to another

    - by DigiMortal
    I have a database with data imported from another server using the import and export wizard of SQL Server Management Studio. There is also an empty database with the same tables, but that one also has primary keys, foreign keys and indexes. How do I get the data from the first database into the other? Here is the description of my crusade. And believe me – it is not a nice one.

    Bugs in the import and export wizard

    There are some awful bugs in the import and export wizard that make data imports and exports possible only in a very limited manner: the wizard is not able to analyze foreign keys, and the wizard always wants to create tables, whatever you say in the settings. The result is a faulty and useless package. Now let's go step by step and make things work in our scenario.

    Databases

    There are two databases. Let's name them like this:
    - PLAIN – contains the data imported from the remote server (no indexes, no keys, no nothing, just plain dumb data)
    - CORRECT – empty database with the same structure as the remote database (indexes, keys and everything else, but no data)

    Our goal is to get the data from PLAIN to CORRECT.

    1. Create the import and export package

    At this point we will create the faulty SSIS package using SQL Server Management Studio. Run the import and export wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let's say, CORRECT-2. Make sure you enable identity insert. Make sure there are no views selected. Make sure you don't let the package create tables (you can miss this step because it wants to create tables anyway). Save the package to SSIS.

    2. Modify the import and export package

    Now let's clean up the package and remove all the faulty crap. Connect SQL Server Management Studio to the SSIS instance, select the package you just saved and export it to your hard disc. Run Business Intelligence Studio and create a new SSIS project (DON'T MISS THIS STEP), then add the package from disc as an existing item to the project and open it.

    Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks, or modify all preparation SQL tasks so the existence of tables is checked before a table is created (yes, you have to do it manually).

    Add a new Execute SQL task as the first task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
        GO

        EXEC sp_MSForEachTable 'DELETE FROM ?'
        GO

    Save the task. Add a new Execute SQL task as the last task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
        GO

    Save the task. Now connect the first Execute SQL task to the first Data Flow task, and the last Data Flow task to the second Execute SQL task.

    Now move to the Package Explorer tab and change the connections under the Connection Managers folder: make the source connection use database PLAIN, and make the destination connection use database CORRECT. Save the package and rebuild the project. Update the package using SQL Server Management Studio. Some hints: make sure you take the package from the solution folder, because it is saved there now; don't overwrite the existing package – use a numeric suffix and let Management Studio create a new version of the package.

    Now you are done with your package. Run it to test it and clean out all the errors you find.

    TRUNCATE vs DELETE

    You can see that I used DELETE FROM instead of TRUNCATE. Why? Because TRUNCATE has some nasty limits (taken from MSDN): "You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view." As I am not sure what tables you have and how they are used, I have provided here the solution that should work for all scenarios. If you need better performance, then in some cases you can use TRUNCATE TABLE instead of DELETE.

    Conclusion

    My conclusion is bitter this time, although I am a very positive guy. It is A.D. 2010 and we still have to write stupid hacks for simple things. Simple tools that existed before are long gone, and we have to live with mysterious bloatware that is our only choice when using the default tools. If you take a look at the length of this posting and the number of steps I had to take for one easy thing, you should treat it as a signal that something has gone wrong in recent years. Although I got my job done, I would still be happier if the out-of-the-box tools were more intelligent one day.

    References
    - T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli)
    - TRUNCATE TABLE (MSDN Library)
    - Error Handling in SQL 2000 – a Background (Erland Sommarskog)
    - Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher)
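    As a side note, the same disable, clear, re-enable sequence can be scripted outside of SSIS. Here is a minimal JDBC sketch of it (my illustration, not part of the original post); it assumes the Microsoft SQL Server JDBC driver is on the classpath, and the connection string and credentials are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class PrepareDestination {
            public static void main(String[] args) throws Exception {
                // Placeholder connection string -- adjust server, database and credentials.
                String url = "jdbc:sqlserver://localhost;databaseName=CORRECT;"
                           + "user=sa;password=<your password>";
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement()) {
                    // Disable all FK/check constraints so rows can load in any order.
                    st.execute("EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'");
                    // Clear existing rows (DELETE, not TRUNCATE -- see the note above).
                    st.execute("EXEC sp_MSForEachTable 'DELETE FROM ?'");
                    // ... run the data copy (e.g., the SSIS data flow) here ...
                    // Re-enable the constraints afterwards.
                    st.execute("EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'");
                }
            }
        }

    Note that GO is an SSMS/sqlcmd batch separator, not T-SQL, which is why each batch is executed as a separate statement here.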

    Read the article

  • About Me

    - by Jeffrey West
    I’m new to blogging. This is the second blog post that I have written, and before I go too much further I wanted the readers of my blog to know a bit more about me…

    Kid’s Stuff

    By trade, I am a programmer (or coder, developer, engineer, architect, etc). I started programming when I was 12 years old. When I was 7, we got our first ‘family’ computer – an Apple IIc. It was great to play games on, and of course what else was a 7-year-old going to do with it. I did have one problem with it, though. When I put in my 5.25” floppy to play a game, sometimes, instead of loading my game I would get a mysterious ‘]’ on the screen with a flashing cursor. This, of course, was not my game. Much like the standard ‘Microsoft fix’ is to reboot, back then you would take the floppy out, shake it, restart the computer and pray for a different result.

    One day, I learned at school that I could topple my nemesis – the ‘]’ and flashing cursor – by typing ‘load’ and pressing enter. Most of the time this would load my game and then I would get to play. Problem solved. However, I began to wonder – what else can I make it do?

    When I was in 5th grade my dad got the bright idea to buy me a Tandy 1000HX. He didn’t know what I was going to do with it, and neither did I. Needless to say, my mom wasn’t happy about buying a 5th grader a $1,000 computer. Nonetheless, over time I learned how to write simple BASIC programs out of the back of my Math book:

        10 x=5
        20 y=6
        30 PRINT x+y

    That was fun for all of about 5 minutes. I needed more – more challenges, more things that I could make the computer do. In order to quench this thirst my parents sent me to National Computer Camps in Connecticut. It was one of the best experiences of my childhood, and I spent 3 weeks each summer after that learning BASIC, Pascal, Turbo C and some C++. There weren’t many kids at the time who knew anything about computers, and let’s just say my knowledge of and interest in computers didn’t score me many ‘cool’ points. My experiences at NCC set me on the path that I find myself on now, and I am very thankful for the experience.

    Real Life

    I have held various positions in the past at different levels within the IT layer cake. I started out as a Software Developer for a startup in the Dallas, TX area building software for semiconductor testing statistical process control and sampling. I was the second Java developer that was hired, and the ninth employee overall, so I got a great deal of experience developing software. Since there weren’t that many people in the organization, I also got a lot of field experience, which meant that if I screwed up the code, I got yelled at (figuratively) by both my boss AND the customer. Fun times! What made it better was that I got to help run pilot programs in Taiwan, Singapore, Malaysia and Malta. Getting yelled at in Taiwan is slightly less annoying than getting yelled at in Dallas…

    I spent the next 5 years at Accenture doing systems integration in the ‘SOA’ group. I joined as a Consultant and left as a Senior Manager. I started out writing code in WebLogic Integration and left after I wrapped up a project where I led a team of 25 to develop the next generation of a digital media platform to deliver HD content in a digital format. At Accenture, I had the pleasure of working with some truly amazing people – mentoring some and learning from many others – and on some incredible real-world IT projects.

    Given my background with the BEA stack of products, I was often called in to troubleshoot and tune WebLogic, ALBPM and ALSB installations, and I have logged many hours digging through thread dumps, running performance tests with SoapUI and decompiling Java classes we didn’t have the source for so I could see what was going on in the code.

    I am now a Senior Principal Product Manager at Oracle in the Application Grid practice. The term ‘Application Grid’ refers to a collection of software and hardware products within Oracle that enables customers to build horizontally scalable systems. This collection of products includes WebLogic, GlassFish, Coherence, Tuxedo and the JRockit/HotSpot JVMs (HotSprocket, maybe?). Now, with the introduction of Exalogic, it has grown to include hardware as well.

    Wrapping it up…

    I love technology and have a diverse background ranging from software development to HW and network architecture & tuning. I have held certifications as an Oracle Certified DBA, MCSE and Cisco Certified Network Professional (CCNP), among others, and I have put those to great use over my career. I am excited about programming & technology and I enjoy helping people learn and be successful. If you are having challenges with WebLogic, BPM or Service Bus, feel free to reach out to me and I’ll be happy to help as I have time.

    Thanks for stopping by!

    --Jeff

    Read the article

  • SPECIALIZATION: DIFFERENTIATE YOURSELF AND BE RECOGNIZED

    - by michaela.seika(at)oracle.com
    The market is asking us for more and more solutions and commitments. To build these solutions, we rely on you, Oracle Partners. In terms of commitments, Oracle must communicate to the market about the specialization of its partners and about their skills on the kinds of projects customers ask us to address.

    More than 50 specializations are available today for Gold, Platinum and Diamond partners:
    - on Technology products such as the Database, Database options, SOA, Business Intelligence, …
    - on Application products such as ERP, CRM, …
    - on Hardware products and operating systems.

    To help you specialize, and therefore get certified, our two value-added distributors, Altimate and Arrow ECS, assist you in this process.

    ALTIMATE invites you to take part in the Lunch & Spécialisation tour. Take advantage of this program, which has been put in place to help you specialize and enjoy all the benefits that specialization gives you access to.

    ARROW ECS invites you to take part in L'Ecole de la spécialisation Oracle by Arrow (the Oracle specialization school by Arrow), as well as the Oracle Solutions Tour: discover Oracle solutions on this tour of France. On the program: roadmaps, product and solution workshops, certifications.

    BENEFITS:
    - Oracle's commitment alongside its partners to address major deals
    - visibility with customers, to be identified as an Expert on an offering, recognized and validated by Oracle
    - support (free support access) and Oracle University (vouchers to certify your Implementation Consultants for free)
    - Marketing budgets (lead generation, creation of Marketing campaigns, sponsoring customer events)

    Differentiate yourself by specializing in your domain of expertise and accelerate your success! Oracle and its Value-Added Distributors.

    Eric Fontaine, Director of Alliances & Channel Technology for Southern Europe, presents specialization and its benefits in a video.

    CONTACTS: ORACLE – Jean-Jacques Panissié, Oracle Partner Development A&C Technology, +33 157 60 28 52 | ALTIMATE – Sophie Daval, +33 1 34 58 47 68 | ARROW – [email protected], +33 1 49 97 59 63

    Read the article

  • Oracle takes part in the MDM Forum - October 24, 2013

    - by Louisa Aggoune
    Aimed at executive, line-of-business and IT management, this 2nd MDM (Master Data Management) Forum aims to present the latest innovations and the know-how of the market's major players in the form of workshops.

    ORACLE, a sponsor of the event, invites you to its booth and to its workshop: Oracle MDM and data quality in the service of the governance of your internal and external data, or how to enrich and federate your information assets with social data, Big Data, cloud data…

    Many concrete customer experiences will also be shared during this morning session, through the testimonials of:
    - Philippe Kirady, CARDIF, Director of Business Expertise and Processes
    - Michelle Martin, SOLVAY, Data Management Director
    - Thierry Chamfrault, TECHNIP, Director of IT Quality and Methods
    - Manuel Amorim, LE PARISIEN, Head of Audience and Loyalty, Digital Department
    - Jean-Michel Collomb, Enterprise Architect, AMADEUS Global Business Services

    Do you face challenges around:
    - data governance
    - data quality
    - master data repositories for products, customers, suppliers, HR?

    Venue: Thursday, October 24, 2013, from 8:30 am to 1:30 pm - Centre de Conférences Paris Victoire - 52, rue de la Victoire - 75009 Paris
    Registration: by return email to [email protected] or [email protected]

    Read the article

  • Five development tools I can't live without

    - by bconlon
    When applying to join Geeks with Blogs I had to specify the development tools I use every day. That got me thinking: it's taken a long time to whittle my tools of choice down to the selection I use, so it might be worth sharing. Before I begin, I appreciate we all have our preferred development tools, but these are the ones that work for me.

    Microsoft Visual Studio

    Microsoft Visual Studio has been my development tool of choice for more years than I care to remember. I first used this when it was Visual C++ 1.5 (hats off to those who started on 1.0) and by 2.2 it had everything I needed from a C++ IDE. Versions 4 and 5 followed, and if I had to guess I would expect more Windows applications are written in VC++ 6 and VB6 than any other language. Then came the not-so-great versions Visual Studio .Net 2002 (7.0) and 2003 (7.1). If I'm honest, I was still using v6. 2005 was better and 2008 was simply brilliant. Everything worked, the compiler was super fast and I was happy again... then came 2010... oh dear. 2010 is a big step backwards for me. It's not encouraging for my upcoming WPF exploits that 2010 is fronted in WPF technology, with the forever-growing Find/Replace dialog, the issues with C++ IntelliSense, and the buggy debugger. That said, it is still my tool of choice, but I hope they sort the issues in SP1. I've tried other IDEs like VisualAge and Eclipse, but for me Visual Studio is the best. A really great tool.

    Liquid XML Studio

    XML development is a tricky business. The W3C standards are often difficult to get to the bottom of, so it's great to have a graphical tool to help. I first used Liquid Technologies 5 or 6 years back when I needed to process XML data in C++. Their excellent XML Data Binding tool has an easy-to-use wizard UI (as compared to Castor or JAXB command-line tools) and allows you to generate code from an XML Schema. So instead of having to deal with untyped nodes as with a DOM parser, you get an object model providing a custom API in C++, C#, VB etc. More recently they developed a graphical XML IDE with an XML editor, XSLT and XQuery debuggers, and other XML tools. So now I can develop an XML Schema graphically, click a button to generate a sample XML document, and click another button to run the wizard to generate code, including a sample application that will then load my sample XML document into the generated object model. This is a very cool toolset. Note: XML Data Binding is nothing to do with WPF data binding, but I hope to cover both in more detail another time.

    .Net Reflector

    Note: I've just noticed that starting from the end of February 2011 this will no longer be a free tool!! .Net Reflector turns .Net byte code back into C# source code. But how can it work this magic? Well, the clue is in the name: it uses reflection to inspect a compiled .Net assembly. The assembly is compiled to byte code; it doesn't get compiled to native machine code until it's needed, using a just-in-time (JIT) compiler. The byte code still has all of the information needed to see classes, variables, methods and properties, so Reflector gathers this information and puts it in a handy tree. I have used .Net Reflector for years in order to understand what the .Net Framework is doing, as it sometimes has undocumented, quirky features. This really has been invaluable in certain instances, and I cannot heap enough kudos on the original developer, Lutz Roeder.

    Smart Assembly

    In order to stop nosy geeks looking at our code using a tool like .Net Reflector, we need to obfuscate (mess up) the byte code. Smart Assembly is a tool that does this. Again, I have used this for a long time. It is very quick and easy to use. Another excellent tool. Coincidentally, .Net Reflector and Smart Assembly are now both owned by Red Gate. Again, kudos goes to the original developer, Jean-Sebastien Lange.

    TortoiseSVN

    SVN (Apache Subversion) is a source control system developed as an open source project. TortoiseSVN is a graphical UI wrapper over SVN that hooks into Windows Explorer to enable files to be updated, committed, merged etc. from the right-click menu. This is an essential tool for keeping my hard work safe! Many years ago I used Microsoft SourceSafe and I disliked CVS-type systems, but TortoiseSVN is simply the best source control tool I have ever used.

    ---

    So there you have it: my top 5 development tools that I use (nearly) every day and that have helped to make my working life a little easier. I'm sure there are other great tools that I wish I used but have never heard of, but if you have not used any of the above, I would suggest you check them out as they are all very, very cool products.
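    As a side note, the reflection mechanism that Reflector builds on is easy to demonstrate. Here is a minimal sketch of the same idea in Java terms (my illustration, not from the original post – Reflector itself works on .Net assemblies, not Java classes): it loads a compiled class by name and walks its fields and methods, much as Reflector builds its tree.

        import java.lang.reflect.Field;
        import java.lang.reflect.Method;

        public class MiniReflector {
            public static void main(String[] args) throws Exception {
                // Load a compiled class by name -- no source code required.
                String name = args.length > 0 ? args[0] : "java.util.ArrayList";
                Class<?> cls = Class.forName(name);
                System.out.println("class " + cls.getName());
                for (Field f : cls.getDeclaredFields()) {      // fields, including private ones
                    System.out.println("  field : " + f.getType().getSimpleName() + " " + f.getName());
                }
                for (Method m : cls.getDeclaredMethods()) {    // methods and their return types
                    System.out.println("  method: " + m.getReturnType().getSimpleName() + " " + m.getName());
                }
            }
        }

    Point it at any class on the classpath (java MiniReflector java.lang.String) and it prints the members with no source code in sight – obfuscators like Smart Assembly exist precisely because this is so easy.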

    Read the article

  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully. I’m talking about “feelings”, not about advanced technical points proved in a scientific and objective way. I still haven’t had a chance to play with an MS Surface tablet (I would love to, of course), so my ideas just come from reading different articles on the net and MS official statements. Remember also that the MVP motto begins with “Independent” (“Independent Experts. Real World Answers.”) and this is just my humble opinion about a product and a technology. I know that, being an MS MVP, I can be called an “MS fanboy”; I don’t care. I hope that people can appreciate my opinion, even if it doesn’t match theirs.

    The “Surface” brand can be confusing for techies who knew the “original” Surface concept, but I think it will be a fresh new brand name for most of the people out there. But marketing departments are here to confuse people… so I can understand this recycling of an existing name.

    So Microsoft is entering the hardware arena… for me this is good news. Microsoft developed some nice hardware in the past: the Xbox, the Zune (even if its commercial success was quite limited) and, last but not least, the two Arc mice (old and new model) that I use and appreciate.

    In the past Microsoft worked with OEMs, and that model led to good and bad things. The good thing (for Microsoft, at least) is market domination by Windows-based PCs, which only in recent years has been reduced by the return of the Mac and by tablets. Google is also moving into the hardware business with its acquisition of Motorola, and Apple leveraged its control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from Windows (but where to?) or just lead the pack, showing how devices should be designed to compete in the market and bringing back some of the innovation that has disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish one laptop from the huge mass of anonymous PCs on display… only Macs shine out there…). Having to compete with MS “official” hardware will force OEMs to develop better products and bring back some real competition to a market that has been ruled only by prices (the lower the better, even when that means low quality) and no innovative features at all (when was the last time a new PC surprised you?).

    Moving into a new market is a big and risky move, but with Windows 8 Microsoft is making a crucial move for its future, trying to get back into the innovation race against Apple and Google. MS can’t afford to fail this time.

    I saw the new devices (the WinRT and Pro), and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover… Using “HD” and “full HD” to define display resolution instead of using the real figures, and reviving the “ClearType” brand (now dead on Win8 as reported here, and missed by people who hate to read text on displays, like myself) without providing clear figures (couldn’t you count those damned pixels?), seems to imply that MS was caught by surprise by Apple’s recent “retina” displays that brought very high definition screens to tablets. Also, there are no specifications about the processors used (even if some sources report an NVidia Tegra for the ARM tablet and an i5 for the x86 one) or the expected battery life (a critical point for tablets and the point that killed Windows 7 x86-based tablets).

    Also, nothing about the price, and this will be another critical point, because the other platforms out there already provide lots of applications and have a good user base; if MS wants to enter this market, tablet pricing must be competitive.

    There are some expansion ports (SD and USB), so no fixed storage model (even if the specs talk about 32–64GB for RT and 128–256GB for Pro). I like this, and I don’t like the Apple model where flash memory (which is dirt cheap in thumb drives or SD cards) is as expensive as gold (or cocaine, to have a more accurate per-gram measurement) when mounted inside a tablet/phone. For big files you’ll be able to use external media, and an SD card could be used to store files that don’t require super-fast SSD-like access times, I hope.

    To be honest, I really don’t like the marketplace model and the limitations of the Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!) or the lack of desktop support on ARM (even if the support is there and has been used to port Office). It’s a step toward the consumer market (where competitors are making big money), but it may impact enterprise (and embedded) users, who may not appreciate Windows 8’s new UI or the limitations of the new app model (if you aren’t connected, you are dead). Not having compatibility with the desktop will require brand-new applications, and honestly it makes all the CPU cycles spent in the past converting .NET IL into real machine code feel like a huge waste of time… as soon as a new processor architecture is supported by Windows, you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET, in my perception). On the other side, I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition, and even the all-uppercase menus of VS2012 haven’t changed this situation.

    The new Metro UI got mixed reviews. On my side, I should say that it is very pleasant to use on a touch screen. I like the minimalist design (even if sometimes it is too minimal and hides stuff that, in my opinion, should be visible), but I should also say that using it with mouse and keyboard is like trying to pick your nose with boxing gloves… Metro is also very interesting for embedded devices, where touch screen usage is quite common and where having an application take up the whole screen is the norm. For devices like kiosks, vending machines etc., this kind of UI can be a great selling point.

    I don’t need a new tablet (to be honest, I’m pretty happy with my wife’s iPad and with my PC), but I may change my opinion after having a chance to play a little bit with those new devices and understand what’s hidden under all these mysterious and generic announcements and specifications!

    Read the article

  • Firebug not breaking on errors

    - by stormdrain
    Hi. I reinstalled Firefox today, because... whatever. I reinstalled Firebug, therefully, and now when I try to use it, it's all different. I believe it is the same version I had before. In fact, I even went digging through my trash and replaced the new Firebug with the one I removed with the old Firefox. They ended up being the same version (1.5.3).

    My issue is, when I have an error in my script somewhere, it used to be that if I was on the Script pane of Firebug, the script would break on the error, and the script page would go to the offending line, highlighted, and all was right with the world. Now, it logs the error in the console, and that's it.

    I've spent the better part of the last hour trying to convince myself this isn't worth an ulcer; I am losing the battle, though. I've searched Google, put ads on Craigslist, even thought about becoming a cop.

    There were some examples on the Firebug dox, but none of them helped. A bunch of old references to a mysterious "Break on all errors" option; an option which I think I might have set by accident – there is a little red circle-slash on my pause button (that's what she said), but there the script continues, all on its own. There was a guide somewhere on the Firebug pages that spoke of setting the breakpoint next to the error in the console. I, however, don't have this option for some reason. The line of code is there in the console, but no breakpoint button next to it.

    This, however, would not be ideal even if it worked. I liked it when I could have the script page open, and if there were errors it would jump to that line. I could try to fix it, and reload the page. If that line was fixed, GREAT, on to the next error on the page – which would be highlighted and ready.

    I would like to offer a solicitation of help. Help.

    Read the article

< Previous Page | 15 16 17 18 19 20 21  | Next Page >