Search Results

Search found 344 results on 14 pages for 'jamie stevens'.

Page 13/14 | < Previous Page | 9 10 11 12 13 14  | Next Page >

  • Ralink RT3060 wireless device configuration on ubuntu 12.04

    - by Stephan
    Concerning "How do I get a Ralink RT3060 wireless card working?": I'm running Ubuntu 12.04 with an 'LWPX07 Edimax EW-7711In 150M 1T1R WL PCI Card', which has an rt3060 chip. Out of the box the card is recognized as rt2800sta. I tried solution one; that didn't work. The card still connects to the wireless network, but it seems too slow to load any pages. Then I tried solution two, but then the network manager doesn't see any wireless device.

        $ iwconfig
        lo        no wireless extensions.
        ra0       Ralink STA
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  invalid crypt:0  invalid misc:0
        eth0      no wireless extensions.

        $ lsmod
        Module                  Size  Used by
        rt3562sta             882296  0

        $ lspci -v
        05:02.0 Network controller: Ralink corp. RT3060 Wireless 802.11n 1T/1R
                Subsystem: Edimax Computer Co. Device 7711
                Flags: bus master, slow devsel, latency 64, IRQ 23
                Memory at ff9f0000 (32-bit, non-prefetchable) [size=64K]
                Capabilities: <access denied>
                Kernel driver in use: rt2860
                Kernel modules: rt3562sta, rt2800pci

    Am I missing a configuration step? How do I tell the network card which driver to use? Thanks in advance, Stephan

    I found the problem. As described in Steve's blog http://steveswinsburg.wordpress.com/2011/03/12/how-to-install-a-d-link-dwa-525-wireless-network-card-in-ubuntu-10-04/

        sudo su
        make && make install

    "You need to use ‘sudo su’ and not just ‘sudo’ so it creates the directories properly." That is the problem with the solution described above.

    Read the article

  • Any software transforming broken lines into curves?

    - by user32931
    Hello, do you know of any software that would help me transform a broken line into a curved line? For example, I have an octagon or a heptagon and I want it to be transformed into something resembling a circle. If you know of such software, please let me know. Thank you!

    Update A: Here is an image from the tutorial given to me by Jamie Keeling (right now it's the first answer below). At least the picture there represents what I want. In that tutorial this process is called "flattening paths". I will try to put that image right here, but if it doesn't get displayed, you can find it at this URL: http://msdn.microsoft.com/en-us/library/ms536364%28v=VS.85%29.aspx The red line in the picture is what I would want to submit, and the blue line is what I would want to get in the end:

    Read the article

  • Server 2008 Hard Faults

    - by claw
    Hey all, please bear with me as I haven't looked at a server in a very long time. The problem I am having is with a Windows 2008 Standard FE Service Pack 2 box: Intel Xeon X3430 @ 2.40 GHz, 4 GB memory, 64-bit. There seem to be no problems other than the physical memory peaking at 91%, always with over 100 hard faults per second. To my understanding hard faults should be fairly rare on a machine with this much memory. Are there any logs I can show you, or investigate myself? The general performance of the machine is OK; I can access SBS 2008 and change settings fairly smoothly without hangs etc. However, we connect to the server and do quite a bit of SQL via an application. Retrieving say 20 rows can take 20+ seconds. Thanks in advance, Jamie

    EDIT: What the server is used for: IIS ASP web service, SQL 2008, Exchange. (Unable to upload screenshots due to low reputation - why doesn't my SO rep work here :))

    Read the article

  • glassfish v3 - update all packages via command line on linux

    - by orange80
    Does anyone know how to do this? I just want a one-shot command to "update everything" over the command line. This is for a remote server, so it must be over the command line. I used

        $ sudo pkg list -u

    to see the list of packages that are out of date, but I cannot for the life of me figure out how to say "OK, update them". I have scoured the web for clues, but to no avail :( This is classic Sun Solaris-type patching, which is the exact reason I am now on Linux. Please help!! Thanks :) Jamie

    Read the article

  • What's the term for re-implementing an old API in terms of a newer API

    - by dodgy_coder
    The reason for doing it is to solve the case when a newer API is no longer backwards compatible with an older API. To explain, just say that there is an old API v1.0. The maker of this API decides it is broken and works on a new API v1.1 that intentionally breaks compatibility with the old API v1.0. Now, any programs written against the old API cannot be recompiled as-is with the new API. Then let's say there is a large app written against the old API and the developer doesn't have access to the source code. A solution would be to re-implement a "custom" old API v1.0 in terms of the new API v1.1 calls. So the "custom" v1.0 API keeps the same interface/methods as the v1.0 API, but inside its implementation it is actually making calls to the new API v1.1 methods. The large app can then be compiled and linked against the "custom" v1.0 API and the new v1.1 API without any major source code changes. Is there a term for this practice? There's a recent example of this happening in Jamie Zawinski's port of XScreenSaver to the iPhone - he re-implemented the OpenGL 1.3 API in terms of the OpenGL ES 1.1 API. In this case, OpenGL 1.3 represents the "old" API and OpenGL ES 1.1 represents the "new" API.
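    To make the idea concrete, here is a minimal C sketch of such a shim. Every name in it is hypothetical, invented purely for illustration (this is not Zawinski's actual code): the old v1.0 entry point keeps its exact signature, and its body forwards onto the new v1.1 calls.

        /* old_api_shim.c -- a hypothetical "custom" v1.0 API implemented
           in terms of new v1.1 calls, so v1.0 callers compile unchanged. */
        #include "api_v1_1.h"   /* assumed header for the new API */

        /* Signature preserved exactly as the old v1.0 library declared it. */
        int api_draw_quad(const float corners[8])
        {
            /* Suppose v1.1 removed quads: re-express the old call as two
               triangles using only entry points the new API provides. */
            api11_begin_triangles();
            api11_vertex(corners[0], corners[1]);
            api11_vertex(corners[2], corners[3]);
            api11_vertex(corners[4], corners[5]);
            api11_vertex(corners[0], corners[1]);
            api11_vertex(corners[4], corners[5]);
            api11_vertex(corners[6], corners[7]);
            api11_end();
            return 0;   /* old callers expect a v1.0-style status code */
        }

    This is the same shape as the XScreenSaver example: OpenGL 1.3 calls that do not exist in OpenGL ES 1.1 are re-expressed, behind an unchanged interface, using calls that do.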

    Read the article

  • Should we be able to deploy a single package to the SSIS Catalog?

    - by jamiet
    My buddy Sutha Thiru sent me an email recently asking about my opinion on a particular nuance of the project deployment model in SQL Server Integration Services (SSIS) 2012 and I’d like to share my response as I think it warrants a wider discussion. Sutha asked:

        "Jamie, what is your take on this? http://www.mattmasson.com/index.php/2012/07/can-i-deploy-a-single-ssis-package-from-my-project-to-the-ssis-catalog/

        Overnight I was talking to Matt, who confirmed that they have no plans to change the deployment model. For example, if we have the following scenario, how do we deploy?

        Sprint 1: Pkg1, 2 & 3 have been developed and deployed to UAT. Once signed off, they have been deployed to Live.

        Sprint 2: Pkg4 & 5 have been developed. During this time users raised a bug on Pkg2. We want to make the change to Pkg2 and deploy it to UAT, and eventually to Live, without releasing Pkg4 & 5. How do we do it? Matt pointed me to his blog entry, which I have seen before: http://www.mattmasson.com/index.php/2012/02/thoughts-on-branching-strategies-for-ssis-projects/

        Thanks, Sutha"

    My response: Personally, even though I've experienced the exact problem you just outlined, I agree with the current approach. I steadfastly believe that there should not be a way for an unscrupulous developer to slide in a new version of a package under the covers. Deploying .ispac files brings a degree of rigour to your operational processes. Yes, that means that we as SSIS developers are going to have to get better at using source control and branching properly, but that is no bad thing in my opinion. Claiming to be proper "developers" is a bit of a cheap claim if we don't even do the fundamentals correctly. I would be interested in the thoughts of others who have used the project deployment model. Do you agree with my point of view? @Jamiet

    Read the article

  • Import SSIS Project in Denali CTP1

    For years Analysis Services has had the ability to take an existing database from a server and reverse engineer it into a BIDS project. This is extremely useful when all you have is the running instance of the database and the project that created it has long since disappeared. Reverse engineering has never been a feature of SSIS - until now. Let me walk you through the simple steps. The first step is that you obviously have to have a project deployed to an SSIS Catalog. I will do a video on this soon, but in case you can’t wait, my good buddy Jamie Thomson has written it up here. As you can see, I have a project called, imaginatively, “Denali1” with one package, “Package.dtsx”. The next thing we need to do is fire up BIDS and choose the right project type (Integration Services Import Project). Now we just follow the wizard. We make sure we specify on which server to find the Catalog and in which folder to look for the project. Next the settings are validated and we are greeted with the familiar review screen before the creation of our new project from the deployed project happens. Hit Import and away we go. The result is just what we wanted.

    Read the article

  • Looking into Enum Support in Entity Framework 5.0 Code First

    - by nikolaosk
    In this post I will show you, with a hands-on demo, the enum support that is available in Visual Studio 2012, .NET Framework 4.5 and Entity Framework 5.0. You can have a look at this post to learn about the support for multiple diagrams per model that exists in Entity Framework 5.0. We will demonstrate this with a step by step example. I will use Visual Studio 2012 Ultimate; you can also use Visual Studio 2012 Express Edition. Before I move on to the actual demo I must say that in EF 5.0 an enumeration can have the following underlying types: Byte, Int16, Int32, Int64, SByte.

    Obviously I cannot go into much detail on what EF is and what it does, so I will again give a short introduction. The .NET Framework provides support for object-relational mapping through EF. So EF is an ORM tool and it is now the main data access technology that Microsoft works on. I use it quite extensively in my projects. Through EF we have many things provided for us out of the box: automatic generation of SQL code, mapping of relational data to strongly typed objects, and all the changes made to the objects in memory are persisted in a transactional way back to the data store. You can find in this post an example of how to use the Entity Framework to retrieve data from a SQL Server database using the "Database/Schema First" approach. In this approach we make all the changes at the database level and then we update the model with those changes. In this post you can see an example of how to use the "Model First" approach when working with ASP.NET and the Entity Framework. This model was first introduced in EF version 4.0: we could start with a blank model and then create a database from that model, and when we made changes to the model we could recreate the database from the new model. You can search my blog, because I have posted many posts regarding ASP.NET and EF. I assume you have a working knowledge of C# and know a few things about EF.

    The Code First approach is more code-centric than the other two. Basically we write POCO classes and then we persist them to a database using something called DbContext. Code First relies on DbContext. We create two or three classes (e.g. Person, Product) with properties, and then these classes interact with the DbContext class. We can create a new database based upon our POCO classes and have tables generated from those classes. We do not have an .edmx file in this approach. By using this approach we can write much easier unit tests. DbContext is a new context class and is a smaller, lightweight wrapper for the main context class, which is ObjectContext (Schema First and Model First).

    Let's begin building our sample application.

    1) Launch Visual Studio. Create an ASP.NET Empty Web Application. Choose an appropriate name for your application.

    2) Add a web form, a default.aspx page, to the application.

    3) Now we need to make sure the Entity Framework is included in our project. Go to Solution Explorer and right-click on the project name. Then select Manage NuGet Packages... In the Manage NuGet Packages dialog, select the Online tab and choose the EntityFramework package. Finally click Install. Have a look at the picture below.

    4) Create a new folder. Name it CodeFirst.

    5) Add a new item to your application, a class file. Name it Footballer.cs. This is going to be a simple POCO class. Place it in the CodeFirst folder.
    The code follows:

        public class Footballer
        {
            public int FootballerID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public double Weight { get; set; }
            public double Height { get; set; }
            public DateTime JoinedTheClub { get; set; }
            public int Age { get; set; }
            public List<Training> Trainings { get; set; }
            public FootballPositions Positions { get; set; }
        }

    Now I am going to define my enum values in the same class file, Footballer.cs:

        public enum FootballPositions
        {
            Defender,
            Midfielder,
            Striker
        }

    6) Now we need to create the Training class. Add a new class to your application and place it in the CodeFirst folder. The code for the class follows.

        public class Training
        {
            public int TrainingID { get; set; }
            public int TrainingDuration { get; set; }
            public string TrainingLocation { get; set; }
        }

    7) Then we need to create a context class that inherits from DbContext. Add a new class to the CodeFirst folder and name it FootballerDBContext. Now that we have the entity classes created, we must let the model know about them; I will have to use the DbSet<T> property. The code for this class follows.

        public class FootballerDBContext : DbContext
        {
            public DbSet<Footballer> Footballers { get; set; }
            public DbSet<Training> Trainings { get; set; }
        }

    Do not forget to add (using System.Data.Entity;) at the beginning of the class file.

    8) We must take care of the connection string. It is very easy to create one in the web.config. It does not matter that we do not have a database yet; when we run the DbContext and query against it, it will use the connection string in the web.config and will create the database based on the classes. In my case the connection string inside the web.config looks like this:

        <connectionStrings>
          <add name="CodeFirstDBContext"
               connectionString="server=.\SqlExpress;integrated security=true;"
               providerName="System.Data.SqlClient"/>
        </connectionStrings>

    9) Now it is time to create LINQ to Entities queries to retrieve data from the database. Add a new class to your application in the CodeFirst folder and name the file DALfootballer.cs. We will create a simple public method to retrieve the footballers. The code for the class follows.

        public class DALfootballer
        {
            FootballerDBContext ctx = new FootballerDBContext();

            public List<Footballer> GetFootballers()
            {
                var query = from player in ctx.Footballers
                            where player.FirstName == "Jamie"
                            select player;
                return query.ToList();
            }
        }

    10) Place a GridView control on the Default.aspx page and leave the default name. Add an ObjectDataSource control on the Default.aspx page and leave the default name. Set the DataSourceID property of the GridView control to the ID of the ObjectDataSource control (DataSourceID="ObjectDataSource1"). Let's configure the ObjectDataSource control: click on the smart tag item of the ObjectDataSource control and select Configure Data Source. In the wizard that pops up, select the DALfootballer class and then, in the next step, choose the GetFootballers() method. Click Finish to complete the steps of the wizard. Build your application.

    11) Let's create an Insert() method in order to insert data into the tables. For simplicity I will place it in the Default.aspx.cs file.
        private void Insert()
        {
            var footballers = new List<Footballer>
            {
                new Footballer
                {
                    FirstName = "Steven", LastName = "Gerrard",
                    Height = 1.85, Weight = 85, Age = 32,
                    JoinedTheClub = DateTime.Parse("12/12/1999"),
                    Positions = FootballPositions.Midfielder,
                    Trainings = new List<Training>
                    {
                        new Training { TrainingDuration = 3, TrainingLocation = "MelWood" },
                        new Training { TrainingDuration = 2, TrainingLocation = "Anfield" },
                        new Training { TrainingDuration = 2, TrainingLocation = "MelWood" },
                    }
                },
                new Footballer
                {
                    FirstName = "Jamie", LastName = "Garragher",
                    Height = 1.89, Weight = 89, Age = 34,
                    JoinedTheClub = DateTime.Parse("12/02/2000"),
                    Positions = FootballPositions.Defender,
                    Trainings = new List<Training>
                    {
                        new Training { TrainingDuration = 3, TrainingLocation = "MelWood" },
                        new Training { TrainingDuration = 5, TrainingLocation = "Anfield" },
                        new Training { TrainingDuration = 6, TrainingLocation = "Anfield" },
                    }
                }
            };

            footballers.ForEach(foot => ctx.Footballers.Add(foot));
            ctx.SaveChanges();
        }

    (Note that this assumes a context field in the page class, e.g. FootballerDBContext ctx = new FootballerDBContext();.)

    12) In the Page_Load() event handling routine I called the Insert() method:

        protected void Page_Load(object sender, EventArgs e)
        {
            Insert();
        }

    13) Run your application and you will, hopefully, see the following result. You can see clearly that the data is returned along with the enum value.

    14) You should also have a look at the database: launch SSMS and see the database and its objects (data) created by EF Code First. Have a look at the picture below.

    Hopefully you have now seen the support that exists in EF 5.0 for enums. Hope it helps!

    Read the article

  • Get multiple records with one query

    - by Lewy
    User table:

        name     lastname
        Bob      Presley
        Jamie    Cox
        Lucy     Bush

    Find users:

        q = Query.new("Bob Presley, Cox, Lucy")
        q.find_users
        => {0=>{:name=>"Bob", :lastname=>"Presley"}, 1=>{:lastname=>"Cox"}, 2=>{:name=>"Lucy"}}

    Question: I've got a hash with a few names and last names. I need to build an ActiveRecord query to fetch all users from that hash. I can do

        object = []
        hash = q.find_users
        hash.each do |data|
          # check if data[:lastname] and data[:name] exist
          # object << User.where(:name => ..., :lastname => ...)
        end

    But I think it is highly inefficient. How should I do this?

    Read the article

  • Releasing NSData causes exception...

    - by badmanj
    Hi, can someone please explain why the following code causes my app to bomb?

        NSData *myImage = UIImagePNGRepresentation(imageView.image);
        :
        [myImage release];

    If I comment out the 'release' line, the app runs... but after calling the function containing this code a few times, I get a crash - I guess caused by a memory leak. Even if I comment EVERYTHING else in the function out and just leave those two lines, the app crashes when the release executes. I'm sure this must be a newbie "you don't know how to clean up your mess properly" kind of thing ;-) Cheers, Jamie.

    Read the article

  • Move a sequential set of commits from one (local) branch to another

    - by jpswain09
    Is there a way to move a sequential set of commits from one (local) branch to another? I have searched quite a bit and still haven't found a clear answer for what I want to do. For example, say I have this:

        master    A---B---C
                           \
        feature-1           M---N---O---P---R---Q

    And I have decided that the last 3 commits would be better off like this:

        master    A---B---C
                           \
        feature-1           M---N---O
                                     \
        f1-crazy-idea                 P---R---Q

    I know I can do this, and it does work:

        $ git log --graph --pretty=oneline    (copying down SHA-1 IDs of P, R, Q)
        $ git checkout feature-1
        $ git reset --hard HEAD^^^
        $ git checkout -b f1-crazy-idea
        $ git cherry-pick <P sha1>
        $ git cherry-pick <R sha1>
        $ git cherry-pick <Q sha1>

    I was hoping that instead there would be a more concise way to do this, possibly with git rebase, although I haven't had much luck. Any insight would be greatly appreciated. Thanks, Jamie

    Read the article

  • the creation date of a database

    - by Robert Merkwürdigeliebe
    This is a question that originated from this question by Jamie. I thought I'd check Dougman's answer. What's happening here? A restore?

        select created from dba_users where username = 'SYS';
        select min(created) from dba_objects;
        select created from v$database;

        CREATED
        -------------------------
        10-SEP-08 11:24:44

        MIN(CREATED)
        -------------------------
        10-SEP-08 11:24:28

        CREATED
        -------------------------
        18-DEC-09 15:49:00

    The CREATED date from v$database is more than one year later than the creation date of user SYS and of SYS's objects.

    Read the article

  • Possible to sort via two time stamps and display same row twice.

    - by jamiethompson90
    I'm looking at creating a timetable solution. I have a task sheet that looks like:

        Area 1    item 1    startTime    endTime
        Area 1    item 1    startTime    endTime

    I wish to create a display where I can view which event is happening next, whether by endTime or startTime, i.e.

        Newcastle    reel     16:45    18:45
        Newcastle    reel2    17:45    19:45

    would output:

        Newcastle    reel     16:45
        Newcastle    reel2    17:45
        Newcastle    reel     18:45
        Newcastle    reel2    19:45

    More so, I would like to detect whether the time is a startTime or an endTime. Would I have to enter two rows for each activity (time, area, item, start|end)? I can make the interface handle the creation of two rows; I just wondered if there was a better solution. Any input is appreciated. Thanks, Jamie

    Read the article

  • tcp msl timeout

    - by iamrohitbanga
    The following is given in the book TCP/IP Illustrated by Stevens:

        Quiet Time Concept
        The 2MSL wait provides protection against delayed segments from an earlier incarnation of a connection from being interpreted as part of a new connection that uses the same local and foreign IP addresses and port numbers. But this works only if a host with connections in the 2MSL wait does not crash. What if a host with ports in the 2MSL wait crashes, reboots within MSL seconds, and immediately establishes new connections using the same local and foreign IP addresses and port numbers corresponding to the local ports that were in the 2MSL wait before the crash? In this scenario, delayed segments from the connections that existed before the crash can be misinterpreted as belonging to the new connections created after the reboot. This can happen regardless of how the initial sequence number is chosen after the reboot. To protect against this scenario, RFC 793 states that TCP should not create any connections for MSL seconds after rebooting. This is called the quiet time. Few implementations abide by this since most hosts take longer than MSL seconds to reboot after a crash.

    Do operating systems now wait for 2MSL seconds after a reboot before initiating a TCP connection? Boot times are also shorter these days. Although the ports and sequence numbers are random, is this wait implemented in Linux?

    Read the article

  • tcp msl timeout implementation in linux

    - by iamrohitbanga
    The following is given in the book TCP/IP Illustrated by Stevens:

        Quiet Time Concept
        The 2MSL wait provides protection against delayed segments from an earlier incarnation of a connection from being interpreted as part of a new connection that uses the same local and foreign IP addresses and port numbers. But this works only if a host with connections in the 2MSL wait does not crash. What if a host with ports in the 2MSL wait crashes, reboots within MSL seconds, and immediately establishes new connections using the same local and foreign IP addresses and port numbers corresponding to the local ports that were in the 2MSL wait before the crash? In this scenario, delayed segments from the connections that existed before the crash can be misinterpreted as belonging to the new connections created after the reboot. This can happen regardless of how the initial sequence number is chosen after the reboot. To protect against this scenario, RFC 793 states that TCP should not create any connections for MSL seconds after rebooting. This is called the quiet time. Few implementations abide by this since most hosts take longer than MSL seconds to reboot after a crash.

    Do operating systems now wait for 2MSL seconds after a reboot before initiating a TCP connection? Boot times are also shorter these days. Although the ports and sequence numbers are random, is this wait implemented in Linux? Also, RFC 793 says that this wait is not required if history is maintained. Does Linux maintain any history of used sequence numbers for connections to handle this case?

    Read the article

  • Generate DROP statements for all extended properties

    - by jamiet
    This evening I have been attempting to migrate an existing on-premise database to SQL Azure using the wizard that is built in to SQL Server Management Studio (SSMS). When I did so I received the following error:

        The following objects are not supported = [MS_Description] = Extended Property

    Evidently databases containing extended properties can not be migrated using this particular wizard, so I set about removing all of the extended properties - unfortunately there were over a thousand of them, so I needed a better way than simply deleting each and every one of them manually. I found a couple of resources online that went some way toward this:

        Drop all extended properties in a MSSQL database by Angelo Hongens
        Modifying and deleting extended properties by Adam Aspin

    Unfortunately neither provided a script that exactly suited my needs. Angelo's covered extended properties on tables and columns, however I had other objects that had extended properties on them. Adam's looked more complete, but when I ran it I got an error:

        Msg 468, Level 16, State 9, Line 78
        Cannot resolve the collation conflict between "Latin1_General_100_CS_AS" and "Latin1_General_CI_AS" in the equal to operation.

    So, both great resources, but I wasn't able to use either on their own to get rid of all of my extended properties. Hence, I combined the excellent work that Angelo and Adam had provided in order to manufacture my own script, which did successfully manage to generate calls to sp_dropextendedproperty for all of my extended properties. If you think you might be able to make use of such a script then feel free to download it from https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16707&parid=550F681DAD532637!16706&authkey=!APxPIQCatzC7BQ8. This script will remove extended properties on tables, columns, check constraints, default constraints, views, sprocs, foreign keys, primary keys, table triggers, UDF parameters, sproc parameters, databases, schemas, database files and filegroups. If you have any object types with extended properties on them that are not in that list then consult Adam's aforementioned article - it should prove very useful. I repeat here the message that I have placed at the top of the script:

        /*
        This script will generate calls to sp_dropextendedproperty for every extended property that exists in your database. Actually, a caveat: I don't promise that it will catch each and every extended property that exists, but I'm confident it will catch most of them!

        It is based on this: http://blog.hongens.nl/2010/02/25/drop-all-extended-properties-in-a-mssql-database/ by Angelo Hongens.

        Also had lots of help from this: http://www.sqlservercentral.com/articles/Metadata/72609/ by Adam Aspin. Adam actually provides a script at that link to do something very similar but when I ran it I got an error:

            Msg 468, Level 16, State 9, Line 78
            Cannot resolve the collation conflict between "Latin1_General_100_CS_AS" and "Latin1_General_CI_AS" in the equal to operation.

        So I put together this version instead. Use at your own risk.

        Jamie Thomson 2012-03-25
        */

    Hope this is useful to someone! @Jamiet

    Read the article

  • Oracle WebCenter @ OpenWorld 2012

    - by kellsey.ruppel
    This week, we want to focus on giving our blog readers a preview of Oracle WebCenter related events and activities happening at Oracle OpenWorld this year! Today's guest post comes from Jamie Rancourt, Senior Manager of Product Management for Oracle WebCenter. Are you registered to attend OpenWorld 2012 in San Francisco from September 30 - October 4? If not, the conference details and registration information can be found at http://oracle.com/openworld! Here's a brief rundown of the planned activities for Oracle WebCenter at this year's event.

    WebCenter Sessions. This year WebCenter will be featured in 36 sessions across the following tracks:

        Web Experience Management, Portals, Content Management and Social Network
        Middleware for Enterprise Applications
        Financial Management
        Oracle ADF and Fusion Application Development
        Applications Tools and Technology
        Applications Strategy
        Life Sciences
        Customer Relationship Management
        Oracle RightNow CX Cloud Service
        Siebel Applications
        SOA and Business Process Management
        Oracle Fusion Applications
        Oracle Commerce
        Retail
        Social Business
        Cloud Computing

    Here are a few of the sessions to whet your appetite:

        Oracle WebCenter Strategy: Engaging your Customers. Empowering your Business
        Oracle WebCenter Sites Strategy & Vision
        Oracle WebCenter Content Strategy & Vision
        Oracle WebCenter Portal Strategy & Vision
        Oracle Social Network Strategy & Vision
        Develop a Mobile Strategy with Oracle WebCenter: Engage Customers, Employees, and Partners
        Oracle WebCenter's Cloud Strategy: From Social and Platform Services to Mash-ups

    We also have 4 interactive customer panels planned for the event:

        Using Web Experience Management to Drive Online Marketing Success
        Land Mines, Potholes, and Dirt Roads: Navigating the Way to ECM Nirvana
        Becoming a Social Business: Stories from the Front Lines of Change
        Building Next-Generation Portals: An Interactive Customer Panel Discussion

    And there are many more sessions for you to attend to learn everything there is to know about Oracle WebCenter from our product experts and partners. Make sure to visit the Content Catalog for the complete session details.

    Labs and Demos. This year's event also features 4 WebCenter hands-on labs, each focusing on a different product area: Portal, Content, Sites and Social Network. In addition to the labs, there will be 6 demos featuring Oracle WebCenter in both the Fusion Middleware and Cloud pavilions. Make sure you stop by to see the latest demos and meet our knowledgeable product managers! And don't forget about the Oracle WebCenter Customer Appreciation Event, which is sponsored by our partners and will take place on Tuesday, October 2nd at The Palace Hotel. Be sure to watch the blog for more information in the coming months on how to register! We look forward to seeing you at Oracle OpenWorld 2012!

    Read the article

  • What Would a CyberWar Do To Your Business?

    - by Brian Dayton
    In mid-February the Bipartisan Policy Center in the United States hosted Cyber ShockWave, a simulation of how the country might respond to a catastrophic cyber event. An attack takes place, they can't isolate where it came from or who did it, simulated press reports and market impacts... and the participants in the exercise have to brief the President and advise him/her on what to do. Last week, former Department of Homeland Security Secretary Michael Chertoff, who participated in the exercise, summarized his findings in Federal Computer Weekly. The article, given FCW's readership and the topic, is obviously focused on the public sector and US Federal policies. However, it touches on some broader issues that impact the private sector as well - issues applicable to any government and country/region - such as:

        · How would the US (or any) government collaborate to identify and defeat such an attack? Chertoff calls this out as a current gap. How do the public and private sector collaborate today? How would the massive and disparate collection of agencies and companies act together in a crunch?

        · What would the impact on industries and global economies be? Chertoff, and a companion article in Government Computer News, only touch briefly on the subject - focusing on the impact on capital markets. "There's no question this has a disastrous impact on the economy," said Stephen Friedman, former director of the National Economic Council under President George W. Bush, who played the role of treasury secretary. "You have financial markets shut down at this point, ordinary transactions are dramatically depleted, there's no question that this has a major impact on consumer confidence."

    That Got Me Thinking

        · How would it impact Oracle's customers? I know they have business continuity plans - is this one of their scenarios? What if it's not? How would it impact manufacturing lines, ATM networks, customer call centers...

        · How would it impact me and the companies I rely on? The supermarket down the street, my Internet Service Provider, the service station where I bought gas last night.

    I sure don't have any answers, and neither do Chertoff or the participants in the exercise. "I have to tell you that ... we are operating in a bit of unchartered territory," said Jamie Gorelick, a former deputy attorney general who played the role of attorney general in the exercise. But it is a good thing that governments and businesses are considering this scenario and doing what they can to prevent it from happening.

    Read the article

  • Update a bootable OS X drive clone with rsync?

    - by Joe
    The question: is it possible to keep a bootable backup drive clone of OS X updated with rsync? If rsync is not a viable option, are there alternatives?

    The setup: one internal Samsung 840 SSD [120 GB] in use as my OS X 10.8 boot disk on a recent-model Mac Mini. I have successfully cloned that drive with Disk Utility to a 125 GB partition of another HDD in an external USB 3 enclosure, and at that point I am able to boot to it.

    The goal: as my last system went out in a fiery blaze, taking much valuable data with it, I have a new respect for a proper backup solution and really want to do this right. My goal is to achieve an automated differential backup/update from disk A to disk B while, most importantly, maintaining bootability on the external drive. And I would prefer to do this differentially to minimize stress on the drives. Hence rsync was the first thing to come to mind.

    What I have tried: following along with Jamie Zawinski's differential Mac bootable backup solution. Running this manually initially worked - I tested it with only a very minuscule file change and everything was fine; the external booted and all. Now, after subsequent passes, rsync fails, throwing errors particularly relating to updating 'boot.efi' (I'm not at the machine currently; I will update with the precise log message once I return home). Is this a drive partition size issue? Does rsync require more space? If it can't be done, are there any alternatives? I've heard whispers of dd.

    Read the article

  • Advice on resizing 1280*720 for web audiences.

    - by jamiethompson90
    Forgive my spelling, I'm posting this from my mobile. I've recently decided to record videos to help teach a visual language. My camera likes to boast that it can record in 1280; it's a cheap camera, about £75, so the quality isn't amazing, but it's okay. Anyway, it has some other settings for lower res, but I figure I might as well record in a larger size in case the need arises for a bigger source file in the future. I've been looking at JW Player to play the converted files (MP4 to FLV, I think). What do you think a good size would be to convert to? I want it to look nice and clear, remembering it is a visual language, so lip patterns, facial expressions, body movement, fingers etc. are all important; sound is not that important, but I would like to have the choice to toggle captions. Thanks for any help, any advice appreciated, first time I have done a video project! P.S. If anyone's interested, it's BSL. Jamie

    Read the article

  • Reference for proper handling of PID file on Unix

    - by bignose
    Where can I find a well-respected reference that details the proper handling of PID files on Unix? On Unix operating systems, it is common practice to “lock” a program (often a daemon) by use of a special lock file: the PID file. This is a file in a predictable location, often ‘/var/run/foo.pid’. The program is supposed to check when it starts up whether the PID file exists and, if the file does exist, exit with an error. So it's a kind of advisory, collaborative locking mechanism. The file contains a single line of text, being the numeric process ID (hence the name “PID file”) of the process that currently holds the lock; this allows an easy way to automate sending a signal to the process that holds the lock. What I can't find is a good reference on expected or “best practice” behaviour for handling PID files. There are various nuances: how to actually lock the file (don't bother? use the kernel? what about platform incompatibilities?), handling stale locks (silently delete them? when to check?), when exactly to acquire and release the lock, and so forth. Where can I find a respected, most-authoritative reference (ideally on the level of W. Richard Stevens) for this small topic?
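    For orientation while reading answers, here is a minimal C sketch of one conventional acquisition sequence - an advisory fcntl lock plus a PID write. It is illustrative only (the path and error policy are assumptions), and it deliberately sidesteps exactly the stale-lock and portability nuances the question asks about:

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* Try to take the PID file lock; returns the fd on success
           (keep it open for the life of the process), -1 on failure. */
        int acquire_pidfile(const char *path)   /* e.g. "/var/run/foo.pid" */
        {
            int fd = open(path, O_RDWR | O_CREAT, 0644);
            if (fd < 0)
                return -1;

            struct flock fl;
            memset(&fl, 0, sizeof fl);
            fl.l_type = F_WRLCK;     /* exclusive write lock ... */
            fl.l_whence = SEEK_SET;  /* ... over the whole file */
            if (fcntl(fd, F_SETLK, &fl) < 0) {   /* non-blocking attempt */
                close(fd);           /* another process holds the lock */
                return -1;
            }

            /* Record our PID so others can signal us. */
            char buf[32];
            int n = snprintf(buf, sizeof buf, "%ld\n", (long)getpid());
            if (ftruncate(fd, 0) < 0 || write(fd, buf, (size_t)n) != n) {
                close(fd);
                return -1;
            }
            return fd;
        }

    One nice property of the fcntl approach is that the kernel drops the lock automatically when the process dies, which is one answer to the stale-lock question; the trade-offs against O_CREAT|O_EXCL-style creation are exactly what a good reference should cover.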

    Read the article

  • Why is it assumed that send may return with less than requested data transmitted on a blocking socket?

    - by Ernelli
    The standard method to send data on a stream socket has always been to call send with a chunk of data to write, check the return value to see if all data was sent, and then keep calling send again until the whole message has been accepted. For example, this is a simple example of a common scheme:

        int send_all(int sock, unsigned char *buffer, int len)
        {
            int nsent;
            while (len > 0) {
                nsent = send(sock, buffer, len, 0);
                if (nsent == -1)    /* error */
                    return -1;
                buffer += nsent;
                len -= nsent;
            }
            return 0;    /* ok, all data sent */
        }

    Even the BSD manpage mentions that "...If no messages space is available at the socket to hold the message to be transmitted, then send() normally blocks...", which indicates that we should assume that send may return without sending all data. Now I find this rather broken, but even W. Richard Stevens assumes this in his standard reference book about network programming - not in the beginning chapters, but the more advanced examples use his own writen ("write all data") function instead of calling write. Now I consider this still to be more or less broken, since if send is not able to transmit all data or accept the data in the underlying buffer and the socket is blocking, then send should block and return when the whole send request has been accepted. I mean, in the code example above, what will happen if send returns with less data sent is that it will be called right away again with a new request. What has changed since the last call? At max a few hundred CPU cycles have passed, so the buffer is still full. If send now accepts the data, why couldn't it accept it before? Otherwise we will end up with an inefficient loop where we are trying to send data on a socket that cannot accept data and keep trying, or else? So it seems like the workaround, if needed, results in heavily inefficient code, and in those circumstances blocking sockets should be avoided altogether and non-blocking sockets together with select should be used instead.
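    For reference, here is a minimal sketch of the non-blocking-plus-select approach the question ends on, assuming the socket has already been put in non-blocking mode (the helper name is made up; this is one common shape, not the only one):

        #include <sys/select.h>
        #include <sys/socket.h>
        #include <errno.h>

        /* Like send_all, but on a non-blocking socket: when the kernel
           buffer is full, sleep in select() until it drains instead of
           spinning on short sends. */
        int send_all_nonblocking(int sock, const unsigned char *buffer, int len)
        {
            while (len > 0) {
                ssize_t nsent = send(sock, buffer, (size_t)len, 0);
                if (nsent == -1) {
                    if (errno != EAGAIN && errno != EWOULDBLOCK)
                        return -1;                 /* a real error */
                    fd_set wfds;
                    FD_ZERO(&wfds);
                    FD_SET(sock, &wfds);
                    /* block until the socket is writable again */
                    if (select(sock + 1, NULL, &wfds, NULL, NULL) == -1)
                        return -1;
                    continue;
                }
                buffer += nsent;
                len -= (int)nsent;
            }
            return 0;
        }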

    Read the article

< Previous Page | 9 10 11 12 13 14  | Next Page >