Search Results

Search found 875 results on 35 pages for 'segment'.

Page 19 of 35

  • Router for Infrastructure Network

    - by amfortas
    We have an HPC operation that over the years has grown to several racks of gear at three sites, hooked up via Gigabit fiber and Catalyst 2960s (we control the links and switches). Thus far all machines have been on a flat RFC 1918 10/8, but we are looking to segment the network in order to streamline matters for iSCSI and generally keep infrastructure equipment away from our end users. We have now reached a point where we need to consider introducing VLANs for specific subnets, and we are wondering if it would be worthwhile in the longer run to acquire a small router to keep track of all this and cut down on the complexity of netmasks and routes on host machines, etc. Has anyone here had a similar experience? Suggestions as to suitable equipment would be welcome.

    Read the article

  • Accessing large log files on a unix machine with textpad

    - by Jason
    Hi, I need to access large log files on a Unix server with TextPad. (TextPad for historical reasons; I personally prefer less, awk, grep, etc., of course.) But I have many personnel who would rather use TextPad: they have years of experience with it and can tweak it to do whatever they want. The problem is that if I connect with, for example, WinSCP to get a log file into TextPad, it first fetches the full log, so the user has to wait and the editor bloats. I would rather have TextPad somehow access the Unix machine and fetch only the relevant segment of the log file (large log files can be gigabytes). Does anyone know how this can be achieved?
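    One editor-independent way to approach this is to pull only a slice of the remote file before opening it locally, so only the requested segment ever crosses the network. Below is a minimal sketch in Java that shells out to ssh and tail; the hostname, log path, and line count are hypothetical placeholders, and it assumes an ssh client on the PATH with key-based authentication to the server.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class FetchLogSegment {
            public static void main(String[] args) throws Exception {
                // Hypothetical host and log path -- adjust to the real environment.
                String host = "logserver.example.com";
                String remoteLog = "/var/log/app/server.log";
                int lastLines = 2000; // only this segment is transferred

                // Runs: ssh <host> "tail -n 2000 /var/log/app/server.log"
                ProcessBuilder pb = new ProcessBuilder(
                        "ssh", host, "tail -n " + lastLines + " " + remoteLog);
                pb.redirectErrorStream(true);
                Process p = pb.start();

                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        System.out.println(line); // or write to a temp file and open it in TextPad
                    }
                }
                p.waitFor();
            }
        }

    Swapping tail for something like sed -n '10000,20000p' would fetch a range from the middle of the file instead of the tail.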

    Read the article

  • HDDs randomly falling off raid

    - by michael
    I really need help on this. It's the Saturday of a long weekend, so there's no customer service help :( I recently built a new server / light-duty desktop whose main purpose is only file sharing. RAID configuration: Adaptec 6805, 8x 3TB WD Red HDDs, Intel RES2SV240 expander, RAID 6, in an Intel DZ77GA-70K motherboard. I upgraded the firmware, but I'm having a strange problem. During build/verify, the drive in segment 7 went missing. I reinserted the drive into the hot-swap bay and it started to rebuild the array. After the rebuild was done, segments 0 and 5 went missing during build/verify. I reinserted the drives and now I'm praying that the RAID rebuilds successfully from the remaining 6 drives, because I had already transferred some data onto it (I know it was a bad idea). I checked S.M.A.R.T. on the missing drives; it only reports a link failure and aborted commands on one of them. No errors on the HDDs. Connections and cables are good. I added 2 fans blowing on the RAID controller because it was getting too hot, so I guess overheating shouldn't be an issue. What could possibly be wrong? Thank you for your help.

    Read the article

  • Get the Google Analytics page views for a directory [closed]

    - by Michael Morisy
    I have a blog network set up with the following schema. Blogs: example.com/blog-1/, example.com/blog-2/, example.com/blog-3/. Posts: example.com/blog-1/great-post.html, example.com/blog-1/cool-post.html, example.com/blog-1/alright-post.html, example.com/blog-2/awesome-post.html, example.com/blog-2/interesting-post.html, example.com/blog-2/dull-post.html, example.com/blog-3/another-post.html, example.com/blog-3/favorite-post.html. I'm trying to get active page views in Google Analytics for each blog, i.e. everything under example.com/blog-1/*. To do this, I created an advanced segment in Analytics: Page starts with /blog-1/. This works, but it also pulls in any page on my site that links to that blog. Any suggestions for getting just the pages within each blog?

    Read the article

  • Are there web search engines with exact match or regex capabilities (for related terms)?

    - by naxa
    Every once in a while I come upon a situation where Google's way of searching returns results that are too broad, even if I enclose my search terms in quotation marks. For example, I've just tried to find pages that contain both "py.path" and "path.py" without much success. I'm currently aware of engines/sites like Google Code Search for searching actual code, (apparently) Stack Overflow for searching Q&As, SymbolHound, which lets me find symbols, and Wikipedia, which is often a good place to find lists of symbols. But none of these seems to perform very well at matching exactly on pairs (or tuples) of search terms while covering a broad enough segment of the web. Is there a website that is good at finding exact search term pairs? (If not, why not?)

    Read the article

  • Image annotation with Inkscape, Pointer and explanation for objects in picture

    - by None
    I need straight paths with end markers for associating text with parts of the image. For better readability, the annotation needs to be high-contrast, i.e. a white line with a black outline. Stroke to Path will create a group of a box and a circle from a line with an end marker. This makes placement of the markers more difficult, whereas with the node tool it is just a matter of dragging the end nodes of a line segment. I will try to place the markers as lines for now and only convert them to outlines at the very end.

    Read the article

  • Delphi: how to create Firebird database programmatically

    - by Brad
    I'm using D2K9, Zeos 7 Alpha, and Firebird 2.1. I had this working before I added the autoinc field, although I'm not sure I was doing it 100% correctly. I don't know what order to run the SQL in, with the triggers, generators, etc. I've tried several combinations, and I'm guessing I'm doing something wrong beyond just the ordering for this not to work.

    SQL file from IBExpert:

        /*********************************************/
        /* Generated by IBExpert 5/4/2010 3:59:48 PM */
        /*********************************************/

        /*********************************************/
        /* Following SET SQL DIALECT is just for the Database Comparer */
        /*********************************************/
        SET SQL DIALECT 3;

        /*********************************************/
        /* Tables                                    */
        /*********************************************/
        CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID;

        CREATE TABLE EMAIL_ACCOUNTS (
            ID INTEGER NOT NULL,
            FNAME VARCHAR(35),
            LNAME VARCHAR(35),
            ADDRESS VARCHAR(100),
            CITY VARCHAR(35),
            STATE VARCHAR(35),
            ZIPCODE VARCHAR(20),
            BDAY DATE,
            PHONE VARCHAR(20),
            UNAME VARCHAR(255),
            PASS VARCHAR(20),
            EMAIL VARCHAR(255),
            CREATEDDATE DATE,
            "ACTIVE" BOOLEAN DEFAULT 0 NOT NULL /* BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) */,
            BANNED BOOLEAN DEFAULT 0 NOT NULL /* BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) */,
            "PUBLIC" BOOLEAN DEFAULT 0 NOT NULL /* BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) */,
            NOTES BLOB SUB_TYPE 0 SEGMENT SIZE 1024
        );

        /*********************************************/
        /* Primary Keys                              */
        /*********************************************/
        ALTER TABLE EMAIL_ACCOUNTS ADD PRIMARY KEY (ID);

        /*********************************************/
        /* Triggers                                  */
        /*********************************************/
        SET TERM ^ ;

        /*********************************************/
        /* Triggers for tables                       */
        /*********************************************/

        /* Trigger: EMAIL_ACCOUNTS_BI */
        CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS
        ACTIVE BEFORE INSERT POSITION 0
        AS
        BEGIN
          IF (NEW.ID IS NULL) THEN
            NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1);
        END
        ^
        SET TERM ; ^

        /*********************************************/
        /* Privileges                                */
        /*********************************************/

    Triggers:

        /*********************************************/
        /* Following SET SQL DIALECT is just for the Database Comparer */
        /*********************************************/
        SET SQL DIALECT 3;

        CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID;

        SET TERM ^ ;
        CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS
        ACTIVE BEFORE INSERT POSITION 0
        AS
        BEGIN
          IF (NEW.ID IS NULL) THEN
            NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1);
        END
        ^
        SET TERM ; ^

    Generators:

        CREATE SEQUENCE GEN_EMAIL_ACCOUNTS_ID;
        ALTER SEQUENCE GEN_EMAIL_ACCOUNTS_ID RESTART WITH 2;
        /* Old syntax is:
           CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID;
           SET GENERATOR GEN_EMAIL_ACCOUNTS_ID TO 2;
        */

    My code:

        procedure TForm2.New1Click(Sender: TObject);
        var
          query: string;
        begin
          if JvOpenDialog1.Execute then
          begin
            ZConnection1.Disconnect;
            ZConnection1.Database := JvOpenDialog1.FileName;
            if not FileExists(ZConnection1.Database) then
            begin
              ZConnection1.Properties.Add('createnewdatabase=create database '''+ZConnection1.Database+''' user ''sysdba'' password ''masterkey'' page_size 4096 default character set iso8859_2;');
              try
                ZConnection1.Connect;
              except
                ShowMessage('Error Connection To Database File');
                Application.Terminate;
              end;
            end
            else
            begin
              ShowMessage('Database File Already Exists.');
              exit;
            end;
          end;

          query := 'CREATE DOMAIN BOOLEAN AS SMALLINT CHECK (value is null or value in (0, 1))';
          ZConnection1.ExecuteDirect(query);

          query := 'CREATE TABLE EMAIL_ACCOUNTS (ID INTEGER NOT NULL,FNAME VARCHAR(35),LNAME VARCHAR(35),'+
            'ADDRESS VARCHAR(100), CITY VARCHAR(35), STATE VARCHAR(35), ZIPCODE VARCHAR(20),' +
            'BDAY DATE, PHONE VARCHAR(20), UNAME VARCHAR(255), PASS VARCHAR(20),' +
            'EMAIL VARCHAR(255),CREATEDDATE DATE , '+
            '"ACTIVE" BOOLEAN DEFAULT 0 NOT NULL,'+
            'BANNED BOOLEAN DEFAULT 0 NOT NULL,'+
            '"PUBLIC" BOOLEAN DEFAULT 0 NOT NULL,' +
            'NOTES BLOB SUB_TYPE 0 SEGMENT SIZE 1024)';
          //ZConnection.ExecuteDirect('CREATE TABLE NOTES (noteTitle TEXT PRIMARY KEY,noteDate DATE,noteNote TEXT)');
          ZConnection1.ExecuteDirect(query);
          { }
          query := 'CREATE SEQUENCE GEN_EMAIL_ACCOUNTS_ID;'+
            'ALTER SEQUENCE GEN_EMAIL_ACCOUNTS_ID RESTART WITH 1';
          ZConnection1.ExecuteDirect(query);

          query := 'ALTER TABLE EMAIL_ACCOUNTS ADD PRIMARY KEY (ID)';
          ZConnection1.ExecuteDirect(query);

          query := 'SET TERM ^';
          ZConnection1.ExecuteDirect(query);

          query := 'CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS'+
            'ACTIVE BEFORE INSERT POSITION 0'+
            'AS'+
            'BEGIN'+
            'IF (NEW.ID IS NULL) THEN'+
            'NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1);'+
            'END'+
            '^'+
            'SET TERM ; ^';
          ZConnection1.ExecuteDirect(query);

          ZTable1.Active := true;
        end;
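    For what it's worth, the usual sticking points here are that ExecuteDirect runs one statement at a time, that SET TERM is an isql/IBExpert client directive rather than server SQL, and that concatenating the trigger text without spaces or line breaks glues keywords together. Below is a minimal sketch of the same creation order (generator, then table, then primary key, then trigger as a single statement) written in Java against JDBC instead of Delphi/Zeos, purely to illustrate the sequencing; the Jaybird driver, connection URL, and credentials are assumptions to adapt.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class CreateFirebirdSchema {
            public static void main(String[] args) throws Exception {
                // Assumed Jaybird JDBC URL and credentials -- adjust to the real server and file.
                String url = "jdbc:firebirdsql://localhost:3050/C:/data/email.fdb";
                try (Connection con = DriverManager.getConnection(url, "sysdba", "masterkey");
                     Statement st = con.createStatement()) {

                    // 1. Generator first, so the trigger can reference it.
                    st.execute("CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID");

                    // 2. Table (columns trimmed here for brevity).
                    st.execute("CREATE TABLE EMAIL_ACCOUNTS (ID INTEGER NOT NULL, FNAME VARCHAR(35), LNAME VARCHAR(35))");

                    // 3. Primary key.
                    st.execute("ALTER TABLE EMAIL_ACCOUNTS ADD PRIMARY KEY (ID)");

                    // 4. Trigger, sent as ONE statement; no SET TERM, and note the explicit
                    //    whitespace between keywords that string concatenation can otherwise lose.
                    st.execute(
                        "CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS\n" +
                        "ACTIVE BEFORE INSERT POSITION 0 AS\n" +
                        "BEGIN\n" +
                        "  IF (NEW.ID IS NULL) THEN NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID, 1);\n" +
                        "END");
                }
            }
        }

    The same ordering should carry over to the Zeos code above: drop the SET TERM calls and make sure each concatenated SQL fragment ends with a space or line break.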

    Read the article

  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quickly. Here's a rundown of what I have, how I'm querying it, and how long it's taking. I'm running SQL Server 2008 Standard, so partitioning isn't currently an option. I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table:

        CREATE TABLE [dbo].[LogInvSearches_Daily](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [Inv_ID] [int] NOT NULL,
            [Site_ID] [int] NOT NULL,
            [LogCount] [int] NOT NULL,
            [LogDay] [smalldatetime] NOT NULL,
            CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
        ) ON [PRIMARY]

    This table has 132,000,000 records and is over 4 gigs. A sample of 10 rows from the table:

        ID                   Inv_ID      Site_ID     LogCount    LogDay
        -------------------- ----------- ----------- ----------- -----------------------
        1                    486752      48          14          2009-07-21 00:00:00
        2                    119314      51          16          2009-07-21 00:00:00
        3                    313678      48          25          2009-07-21 00:00:00
        4                    298863      0           1           2009-07-21 00:00:00
        5                    119996      0           2           2009-07-21 00:00:00
        6                    463777      534         7           2009-07-21 00:00:00
        7                    339976      503         2           2009-07-21 00:00:00
        8                    333501      570         4           2009-07-21 00:00:00
        9                    453955      0           12          2009-07-21 00:00:00
        10                   443291      0           4           2009-07-21 00:00:00

        (10 row(s) affected)

    I have the following index on LogInvSearches_Daily:

        /****** Object: Index [IX_LogInvSearches_Daily_LogDay] Script Date: 05/12/2010 11:08:22 ******/
        CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay] ON [dbo].[LogInvSearches_Daily]
        (
            [LogDay] ASC
        )
        INCLUDE ( [Inv_ID], [LogCount])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    I need to pull inventory only from the Inventory table for a specific account ID. I have an index on Inventory as well. I'm using the following query to aggregate the data and give me the top 5 records. This query is currently taking 24 seconds to return the 5 rows:

        StmtText
        ----------------------------------------------------------------------------------------------------------------
        SELECT TOP 5
            Sum(LogCount) AS Views
            , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
            , Inv_ID
        FROM LogInvSearches_Daily D (NOLOCK)
        WHERE LogDay > DateAdd(d, -30, getdate())
            AND EXISTS(
                SELECT NULL
                FROM propertyControlCenter.dbo.Inventory (NOLOCK)
                WHERE Acct_ID = 18731
                    AND Inv_ID = D.Inv_ID
            )
        GROUP BY Inv_ID

        (1 row(s) affected)

        StmtText
        ----------------------------------------------------------------------------------------------------------------
        |--Top(TOP EXPRESSION:((5)))
             |--Sequence Project(DEFINE:([Expr1007]=dense_rank))
                  |--Segment
                  |--Segment
                       |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC))
                            |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
                                 |--Sort(ORDER BY:([D].[Inv_ID] ASC))
                                      |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                                           |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010]))
                                           |    |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                                           |    |    |--Constant Scan
                                           |    |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD)
                                           |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA

        (13 row(s) affected)

    I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster, and it gives me essentially the same execution plan:

        (1 row(s) affected)

        StmtText
        ----------------------------------------------------------------------------------------------------------------
        --SET SHOWPLAN_TEXT ON;
        WITH getSearches AS (
            SELECT LogCount
                -- , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
                , D.Inv_ID
            FROM LogInvSearches_Daily D (NOLOCK)
                INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK) ON Acct_ID = 18731
                    AND I.Inv_ID = D.Inv_ID
            WHERE LogDay > DateAdd(d, -30, getdate())
            -- GROUP BY Inv_ID
        )
        SELECT Sum(LogCount) AS Views, Inv_ID
        FROM getSearches
        GROUP BY Inv_ID

        (1 row(s) affected)

        StmtText
        ----------------------------------------------------------------------------------------------------------------
        |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
             |--Sort(ORDER BY:([D].[Inv_ID] ASC))
                  |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                       |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007]))
                       |    |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                       |    |    |--Constant Scan
                       |    |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD)
                       |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD)

        (8 row(s) affected)

        (1 row(s) affected)

    So given that I'm getting good index seeks in my execution plan, what can I do to get this running faster? Thanks, Dan

    Read the article

  • How to code a time delay between plotting points in JFreeCharts?

    - by Javajava
    Is there a way to animate the plotting process of an XY line chart using JFreeChart? I want to be able to watch the program draw each line segment and connect them. For example, if I paste "gtgtaaacaatatatggcg" into the TextArea, I want to watch it graph each line segment one by one. Thanks in advance! :) My code is below:

        import java.util.Scanner;
        import java.applet.Applet;
        import java.awt.*;
        import java.awt.event.ActionListener;
        import java.awt.event.ActionEvent;
        import org.jfree.chart.*;
        import org.jfree.chart.plot.PlotOrientation;
        import org.jfree.data.xy.*;

        public class RandomWalkComplete extends Applet implements ActionListener
        {
            Panel panel;
            TextArea textarea, outputArea;
            Button move, exit;
            String thetext;
            Scanner reader = new Scanner(System.in);
            String thetext2;
            Label instructions;
            int x, y;

            public void init()
            {
                setSize(500,250); //set size of applet
                instructions = new Label("Paste the gene sequences in the " +
                    "text field and press the graph button.");
                add(instructions);
                panel = new Panel();
                add(panel);
                setVisible(true);
                textarea = new TextArea(10,20);
                add(textarea);
                move = new Button("Graph");
                move.addActionListener(this);
                add(move);
                exit = new Button("Exit");
                exit.addActionListener(this);
                add(exit);
            }

            public void actionPerformed(ActionEvent e)
            {
                XYSeries series = new XYSeries("DNA Walk",false,true);
                x = 0;
                y = 0;
                series.add(x,y);
                if(e.getSource() == move)
                {
                    thetext = textarea.getText(); //the text is the DNA bases pasted
                    thetext = thetext.replaceAll(" ",""); //removes spaces
                    thetext2 = "";
                    for(int i=0; i<thetext.length(); i++)
                    {
                        char a = thetext.charAt(i);
                        switch (a)
                        {
                            case 'A': case 'a': //moves right
                                x+=1; y+=0;
                                series.add(x,y);
                                break;
                            case 'C': case 'c': //moves up
                                x+=0; y+=1;
                                series.add(x,y);
                                break;
                            case 'G': case 'g': //move left
                                x-=1; y+=0;
                                series.add(x,y);
                                break;
                            case 'T': case 't': //move down
                                x+=0; y-=1;
                                series.add(x,y);
                                break;
                            default:
                                // series.add(0,0);
                                break;
                        }
                    }
                    XYDataset xyDataset = new XYSeriesCollection(series);
                    JFreeChart chart = ChartFactory.createXYLineChart
                        ("DNA Random Walk", "", "", xyDataset,
                         PlotOrientation.VERTICAL, true, true, false);
                    ChartFrame frame1 = new ChartFrame("DNA Random Walk", chart);
                    frame1.setVisible(true);
                    frame1.setSize(300,300);
                }
                if(e.getSource()==exit)
                {
                    System.exit(0);
                }
            }

            public void stop(){}
        }
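    One way to get the animated, point-by-point drawing asked about above is to create the chart up front with a nearly empty XYSeries and then add one point per tick of a javax.swing.Timer; JFreeChart redraws automatically whenever the dataset changes. Below is a minimal, self-contained sketch of that idea, kept separate from the applet code above; the DNA string and the 200 ms delay are just example values.

        import javax.swing.Timer;
        import org.jfree.chart.ChartFactory;
        import org.jfree.chart.ChartFrame;
        import org.jfree.chart.JFreeChart;
        import org.jfree.chart.plot.PlotOrientation;
        import org.jfree.data.xy.XYSeries;
        import org.jfree.data.xy.XYSeriesCollection;

        public class AnimatedWalk {
            public static void main(String[] args) {
                String dna = "gtgtaaacaatatatggcg";           // example input
                XYSeries series = new XYSeries("DNA Walk", false, true);
                series.add(0, 0);

                JFreeChart chart = ChartFactory.createXYLineChart(
                        "DNA Random Walk", "", "", new XYSeriesCollection(series),
                        PlotOrientation.VERTICAL, true, true, false);
                ChartFrame frame = new ChartFrame("DNA Random Walk", chart);
                frame.setSize(300, 300);
                frame.setVisible(true);

                final int[] pos = {0, 0};   // current x, y of the walk
                final int[] index = {0};    // next character to plot

                // Add one segment every 200 ms; the chart repaints on each dataset change.
                Timer timer = new Timer(200, e -> {
                    if (index[0] >= dna.length()) {
                        ((Timer) e.getSource()).stop();
                        return;
                    }
                    switch (Character.toUpperCase(dna.charAt(index[0]++))) {
                        case 'A': pos[0] += 1; break; // right
                        case 'C': pos[1] += 1; break; // up
                        case 'G': pos[0] -= 1; break; // left
                        case 'T': pos[1] -= 1; break; // down
                        default:  return;             // ignore anything else
                    }
                    series.add(pos[0], pos[1]);
                });
                timer.start();
            }
        }

    The same Timer-driven loop can be dropped into the applet's actionPerformed: build the chart first, then feed the series its points from the timer callback instead of from the for loop.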

    Read the article

  • Keeping video viewing statistics breakdown by video time in a database

    - by Septagram
    I need to keep a number of statistics about the videos being watched, and one of them is which parts of the video are watched most. The design I came up with is to split the video into 256 intervals and keep a floating-point number of views for each of them. I receive the data as a range of intervals the user watched continuously. The problem is how to store it. There are two solutions I see.

    Row per video segment. Let's have a database table like this:

        CREATE TABLE `video_heatmap` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `video_id` int(11) NOT NULL,
          `position` tinyint(3) unsigned NOT NULL,
          `views` float NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `idx_lookup` (`video_id`,`position`)
        ) ENGINE=MyISAM

    Then, whenever we have to process a number of views, make sure the respective database rows exist and add the appropriate values to the views column. I found out it's a lot faster if the existence of rows is taken care of first (SELECT COUNT(*) of rows for a given video, and INSERT IGNORE if they are lacking), and then a number of update queries is used like this:

        UPDATE video_heatmap
        SET views = views + ?
        WHERE video_id = ? AND position >= ? AND position < ?

    This seems, however, a little bloated. The other solution I came up with is:

    Row per video, update in transactions. A table will look (sort of) like this:

        CREATE TABLE video (
          id INT NOT NULL AUTO_INCREMENT,
          heatmap BINARY (4 * 256) NOT NULL,
          ...
        ) ENGINE=InnoDB

    Then, every time a view needs to be stored, it will be done in a transaction with a consistent snapshot, in a sequence like this:
    1. If the video doesn't exist in the database, it is created.
    2. The row is retrieved, and heatmap, an array of floats stored in binary form, is converted into a form more friendly for processing (in PHP).
    3. Values in the array are increased appropriately and the array is converted back.
    4. The row is changed via an UPDATE query.

    So far the advantages can be summed up like this:

    First approach:
    - Stores data as floats, not as some magical binary array.
    - Doesn't require transaction support, so it doesn't require InnoDB, and we're using MyISAM for everything at the moment, so there won't be any need to mix storage engines. (only applies in my specific situation)
    - Doesn't require a transaction WITH CONSISTENT SNAPSHOT. I don't know what the performance penalties of those are.
    - I already implemented it and it works. (only applies in my specific situation)

    Second approach:
    - Uses a lot less storage space (the first approach stores the video ID 256 times and stores the position for every segment of the video, not to mention the primary key).
    - Should scale better, because of InnoDB's per-row locking as opposed to MyISAM's table locking.
    - Might generally work faster because a lot fewer requests are being made.
    - Easier to implement in code (although the other one is already implemented).

    So, what should I do? If it wasn't for the rest of our system using MyISAM consistently, I'd go with the second approach, but currently I'm leaning towards the first one. But maybe there are some reasons to favour one approach or another?
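    For the first approach above, the per-request work can be kept to one INSERT IGNORE batch plus a single ranged UPDATE. Here is a rough sketch of that sequence in Java/JDBC rather than PHP, purely to make the statement flow concrete; the table and column names come from the post, while the MySQL JDBC URL and credentials are placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class HeatmapUpdate {
            // Record that video `videoId` was watched from segment `from` (inclusive)
            // to `to` (exclusive), adding `weight` views to each segment in the range.
            public static void addViews(Connection con, int videoId, int from, int to,
                                        float weight) throws Exception {
                // Make sure the rows for this range exist (a cheap no-op when they already do).
                try (PreparedStatement ins = con.prepareStatement(
                        "INSERT IGNORE INTO video_heatmap (video_id, position, views) VALUES (?, ?, 0)")) {
                    for (int pos = from; pos < to; pos++) {
                        ins.setInt(1, videoId);
                        ins.setInt(2, pos);
                        ins.addBatch();
                    }
                    ins.executeBatch();
                }
                // One ranged update covering the whole watched interval.
                try (PreparedStatement upd = con.prepareStatement(
                        "UPDATE video_heatmap SET views = views + ? " +
                        "WHERE video_id = ? AND position >= ? AND position < ?")) {
                    upd.setFloat(1, weight);
                    upd.setInt(2, videoId);
                    upd.setInt(3, from);
                    upd.setInt(4, to);
                    upd.executeUpdate();
                }
            }

            public static void main(String[] args) throws Exception {
                // Placeholder connection details.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/stats", "user", "password")) {
                    addViews(con, 42, 10, 20, 1.0f);
                }
            }
        }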

    Read the article

  • Podcast: The Invisible UI : Natural User Interfaces with Josh Blake

    - by craigshoemaker
    Josh Blake of Infostrat joins Pixel8 to discuss NUI development in .NET. Josh is the author of the upcoming book Multitouch on Windows from Manning. Reaching far beyond theory and the niche market of Microsoft Surface, NUI development is now possible with Silverlight and WPF development on Windows 7 and Windows Phone 7 devices. Subscribe to the podcast! The Natural User Interface (NUI) was a prominent force at MIX10. What is NUI? Wikipedia defines it as: "Natural user interface, or NUI, is the common parlance used by designers and developers of computer interfaces to refer to a user interface that is effectively invisible, or becomes invisible with successive learned interactions, to its users. The word natural is used because most computer interfaces use artificial control devices whose operation has to be learned. A NUI relies on a user being able to carry out relatively natural motions, movements or gestures that they quickly discover control the computer application or manipulate the on-screen content. The most descriptive identifier of a NUI is the lack of a physical keyboard and/or mouse." In our interview Josh demystifies what NUI is, makes a distinction between gestures and manipulations, and talks about what is possible today for NUI development. For more from Josh, check out his book and watch his MIX presentation: Developing Natural User Interfaces with Microsoft Silverlight and WPF 4 Touch. Resources mentioned in the show: check out the following videos that show the roots and future of NUI development: Jeff Han's multi-touch TED presentation, Microsoft Surface, Project Natal, and the MIX10 Day 2 keynote. Bill Buxton's work is mentioned a few times during our talk; see his segment of the MIX10 Day 2 keynote.

    Read the article

  • Podcast Show Notes: Evolving Enterprise Architecture

    - by Bob Rhubart
    The latest series of ArchBeat podcast programs grew out of another virtual meet-up, held on March 11. As with previous meet-ups, I sent out a general invitation to the roster of previous ArchBeat panelists to join me on Skype to talk about whatever topic comes up. For this event, Oracle ACE Directors Mike van Alst and Jordan Braunstein showed up, along with Oracle product manager Jeff Davies. The result was an impressive and wide-ranging discussion on the evolution of Enterprise Architecture, the role of technology in EA, the impact of social computing, and the challenge of having three generations of IT people at work in the enterprise, each with different perspectives on technology. Mike, Jordan, and Jeff talked for more than an hour, and the conversation was so good that slicing and dicing it to meet the time constraints for these podcasts has been a challenge. The first two segments of the conversation are now available. Listen to Part 1. Listen to Part 2. Part 3 will go live next week, and an unprecedented fourth segment will follow. These guys have strong opinions, and while there is common ground, they don't always agree. But isn't that what a community is all about? I suspect that you'll have questions and comments after listening, so I encourage you to reach out to Mike, Jordan, and Jeff via the following links: Mike van Alst: Blog | Twitter | LinkedIn | Business | Oracle Mix | Oracle ACE Profile. Jordan Braunstein: Blog | Twitter | LinkedIn | Business | Oracle Mix | Oracle ACE Profile. Jeff Davies: Homepage | Blog | LinkedIn | Oracle Mix (also check out Jeff's book: The Definitive Guide to SOA: Oracle Service Bus). Coming soon: ArchBeat's microphones were there for the panel discussions at the recent Oracle Technology Network Architect Days in Dallas and Anaheim. Excerpts from those conversations will be available soon. Stay tuned.

    Read the article

  • You Can't Win on Price

    - by David Dorf
    This year I did the majority of my Christmas shopping from the comfort of my home office. There aren't many things in stores you can't find online these days. I find it easier to search, research, and compare products online rather than walking the mall anyway. But there's a segment of the population that likes to be in the store, touching the products. For those people, smartphones avail them some of the e-commerce features I mentioned right there in the aisles. First it was RedLaser, then TheFind, ShopSavvy and many others. But the one that should be scaring retailers is Amazon's PriceCheck application. It lets you scan the product barcode, take a picture of the product, or speak the product's name. Once the product is identified, it shows the online prices, with Amazon at the top of the list. Within 10 seconds you can order the item and Amazon Prime members get free 2-day shipping too. I don't think fashion and grocery retailers need to worry much, but I have to believe smartphones are helping Amazon win a little more of the brand-name hardgoods market. So what's a retailer to do? Best Buy has begun to put QR Codes on their shelf labels that are easily scanned by smartphones and take the consumer to a Best Buy Web page where they can get extended information about the product. The consumer is getting the additional information they want, and Best Buy avoids the price comparisons. Of course if a consumer chooses to use the Amazon PriceCheck app, then all bets are off. That's when Best Buy has to hope the in-store experience and customer service will save the sale. My point is that the internet makes information available to everyone, and smartphones make it available anywhere. Unless you want your store to be Amazon's local showroom, you need to be price-competitive but differentiate on other aspects of the shopping experience. With the cost of running a physical store, you can't win on price.

    Read the article

  • Requirement refinement between two levels of specification

    - by user107149
    I am currently working on the definition of the documentation architecture of a system, from customer needs down to software/hardware requirements. I have run into a big problem with the level of refinement of requirements. The classic architecture is PTS -- SSS -- SSDD -- SRS/HRS, with:
    PTS: Purchaser Technical Specification
    SSS: Supplier System Specification
    SSDD: System Segment Design Description
    SRS / HRS: Software / Hardware Requirement Specification
    Requirements from the PTS are reworked in the SSS; this document only expresses needs (no design requirements are defined at this level). Then the system design is described in the SSDD: we allocate requirements from the SSS to functions of the design, and functions are then allocated to components (software or hardware) (we are still at the SSDD level). Finally, for each component, we write one SRS or one HRS. Requirements in the SRS or HRS are refinements of requirements from the SSS (and traceability matrices are maintained between these two levels). My problem is the following: our system is a complex one, and some of the requirements in the SSS need to be refined twice to be at the right level in the SRS (meaning that software people can understand the requirement well enough to code against it). But with this document architecture, I can only refine requirements from the SSS once. The second problem is that only a part of the requirements from the SSS needs to be refined twice; the other part only needs one refinement. In the picture below, the green boxes are requirements at the right level for the SRS or HRS, and the purple boxes are intermediate requirements which cannot be included in the SSS since they are design requirements. Where can I put these purple requirements? Has anyone else already encountered this problem? Should I write two documents at the SRS level? Should I include the intermediate requirements in the SSDD? Should I include both refinement levels (purple and green) in the same SRS document (I'm not sure that's possible since an SRS is only for one component)? Thanks for your help and expertise ;-)

    Read the article

  • Begin the Clone Wars Have!

    - by Antony Reynolds
    Creating a New Virtual Machine from an Existing Virtual Disk In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5.  That is great if you want the DB and FMW running in the same virtual image and it has served me well for some proof of concepts and also for some testing of different JVMs.  However I also wanted to run some testing of FMW with the database running on a separate physical machine.  So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image. What are my Options? There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware.  Some of the options include: Create new virtual disk images for each new VM. Clone the existing disk images and point the new VM at the cloned images. Point the new VM at the existing snapshots. #1 is too much like hard work, install OEL twice, install a database again, install FMW again, run RCU again!  Life is too short! #2 is probably the safest way of doing things.  VirtualBox allows you to clone a disk image for use in a separate machine.  However this of course duplicates the disk and means that it is now occupying 3 times the space, once for the original disk and twice more for the two clones I would need. #3 is the most space efficient way of doing things.  It does mean however that I can only run the new “cloned” images if I have access to the original image because that is where the base snapshots reside.  However this is not a problem for me as long as I remember to keep all threee images together.  So this is the approach we will follow. Snapshot, What Snapshot? As we are going to create new virtual machines based on existing snapshots we need to figure out which snapshot to use.  We do this by opening the “Media Manager” from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want – the snapshot name is identified in the “Attached to:” comment.  In my case I wanted the FMW installed snapshot because that had a database configured for FMW alongside the FMW software.  I made a note of the filename of that snapshot (actually I just noted the first 5 characters as that was all that was needed to uniquely identify the snapshot file). When we create the new machines we will point them at the snapshot filename we have just checked. Network or NotWork? Because we want the two new machines to communicate with each other when hosted in different physical machines we can’t use the default NAT networking mode without a lot of hassle.  But at the same time we need them to have fixed IP addresses relative to each other so that they can see each other whilst also being able to see the outside world. To achieve all these requirements I created two network adapters for each machine.  Adapter 1 was a standard NAT mapping.  This will allow each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VBox provided NAT gateway.  This is the same as the existing configuration. The second adapter I created as a bridged adapter.  This gives the virtual machine direct access to the host network card and by using fixed IP addresses each machine can see the other.  It is important to choose fixed IP addresses that are not routable across your internal network so you don’t get any clashes with other machines on your network.  
Of course you could always get proper fixed IP addresses from your network people, but I have serveral people using my images and as long as I don’t have two instances of the same VM on the same network segment this is easier and avoids reconfiguring the network every time someone wants a copy of my VM.  If it is available I would suggest using the 10.0.3.* network as 10.0.2.* is the default NAT network.  You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine.  If it times out then you are probably safe to use that. Creating the New VMs Now that I had collected the data that I needed I went ahead and created the new VMs. When asked for a “Boot Hard Disk” I used the “Choose a virtual hard disk file…” link to find the snapshot I had previously selected and set that to be the existing hard disk.  I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines because that snapshot had the database with the RCU completed that I wanted for my DB machine and it had the SOA software installed which I wanted for my FMW machine. After the initial creation of the virtual machine go into the network setting section and enable a second adapter which will be bridged.  Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters so that you can later set the bridged adapter to use fixed IP and the NAT adapter to use DHCP. We are now ready to start the VMs and reconfigure Linux. Reconfiguring Linux Because I now have two new machines I need to change their network configuration.  In particular I need to change the hostname, update the hosts file and change the network settings. Changing the Hostname I renamed both hosts by running the hostname command as root: hostname vboxfmw.oracle.com I also edited the /etc/sysconfig file and set the correct hostname in there. HOSTNAME=vboxfmw.oracle.com Changing the Network Settings I needed to change the network configuration to give the bridged network a fixed IP address.  I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS.  So I went in to the System->Preferences->Network Connections screen and explicitly set the “Device MAC address” for the two adapters. Having correctly mapped the Linux adapters to the VirtualBox adapters I then set the Bridged adapter to use fixed IP addressing rather than DHCP.  There is no need for additional routing or default gateways because we expect the two machine to be on the same LAN segment. Updating the Hosts File Having renamed the machines and reconfigured the network I then updated the /etc/hosts file to refer to the new machine name add a new line to the hosts file to provide an additional IP address for my server (the new fixed IP address) add a new line for the fixed IP address of the other virtual machine 10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager 10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager 10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager 127.0.0.1       localhost.localdomain   localhost ::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6 To make sure everything takes effect I restarted the server. 
Reconfiguring the Database on the DB Machine Because we changed the hostname the listener and the EM console no longer start so I need to modify the listener.ora to use the new hostname and I also need to rebuild the EM configuration because it also relies on the hostname. I edited the $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname:       (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521)) After changing the listener.ora I was able to start the listener using: lsnrctl start I also had to reconfigure the EM database control.  I first deconfigured it using the command: emca -deconfig dbcontrol db -repos drop This drops the repository and removes any existing registered dbcontrols. I then re-configured it using the following command: emca -config dbcontrol db -repos create This creates the EM repository and then configures and starts dbcontrol. Now my database machine is ready so I can close it down and take a snapshot. Disabling the Database on the FMW Machine I set up the database to start automatically by creating a service called “dbora”.  On the FMW machine I do not need the database running so I can prevent it auto-starting by running the following command: chkconfig –del dbora Note that because I am using a snapshot it is not a waste of disk space to have the DB installed but not used.  As long as I don’t run it, it won’t cost me anything. I can now close the FMW machine down and take a snapshot. Creating a New Domain The FMW machine is now ready to create a new domain.  When creating the domain I can point it at the second machine which is running the database.  I can potentially run these machines on two separate physical machines as long as I have the original virtual machine available to both of the physical machines. Gotchas in Snapshotting VirtualBox does not support the concept of linked machines in a network like some virtualization technologies so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them.  This is because we want to keep the database in sync with the middleware.  One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share, this would mean that all we would need to snapshot would be the database machine because that would hold all the state and configuration. The Sky’s the Limit We have covered a simple case of having just two machines.  I have a more complicated configuration in which two machine run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image.  Just remember what machine holds state and what are the consequences of taking a snapshot.

    Read the article

  • Documentation in Oracle Retail Analytics, Release 13.3

    - by Oracle Retail Documentation Team
    The 13.3 Release of Oracle Retail Analytics is now available on the Oracle Software Delivery Cloud and from My Oracle Support. The Oracle Retail Analytics 13.3 release introduced significant new functionality with its new Customer Analytics module. The Customer Analytics module enables you to perform retail analysis of customers and customer segments. Market basket analysis (part of the Customer Analytics module) provides insight into which products have strong affinity with one another. Customer behavior information is obtained from mining sales transaction history, and it is correlated with customer segment attributes to inform promotion strategies. The ability to understand market basket affinities allows marketers to calculate, monitor, and build promotion strategies based on critical metrics such as customer profitability. Highlighted End User Documentation Updates With the addition of Oracle Retail Customer Analytics, the documentation set addresses both modules under the single umbrella name of Oracle Retail Analytics. Note, however, that the modules, Oracle Retail Merchandising Analytics and Oracle Retail Customer Analytics, are licensed separately. To accommodate new functionality, the Retail Analytics suite of documentation has been updated in the following areas, among others: The User Guide has been updated with an overview of Customer Analytics. It also contains a list of metrics associated with Customer Analytics. The Operations Guide provides details on Market Basket Analysis as well as an updated list of APIs. The program reference list now also details the module (Merchandising Analytics or Customer Analytics) to which each program applies. The Data Model was updated to include new information related to Customer Analytics, and a new section, Market Basket Analysis Module, was added to the document with its own entity relationship diagrams and data definitions. List of Documents The following documents are included in Oracle Retail Analytics 13.3: Oracle Retail Analytics Release Notes Oracle Retail Analytics Installation Guide Oracle Retail Analytics User Guide Oracle Retail Analytics Implementation Guide Oracle Retail Analytics Operations Guide Oracle Retail Analytics Data Model

    Read the article

  • SharePoint Apps and Windows Azure

    - by ScottGu
    Last Monday I had an opportunity to present as part of the keynote of this year’s SharePoint Conference.  My segment of the keynote covered the new SharePoint Cloud App Model we are introducing as part of the upcoming SharePoint 2013 and Office 365 releases.  This new app model for SharePoint is additive to the full trust solutions developers write today, and is built around three core tenants: Simplifying the development model and making it consistent between the on-premises version of SharePoint and SharePoint Online provided with Office 365. Making the execution model loosely coupled – and enabling developers to build apps and write code that can run outside of the core SharePoint service. This makes it easy to deploy SharePoint apps using Windows Azure, and avoid having to worry about breaking SharePoint and the apps within it when something is upgraded.  This new loosely coupled model also enables developers to write SharePoint applications that can leverage the full capabilities of the .NET Framework – including ASP.NET Web Forms 4.5, ASP.NET MVC 4, ASP.NET Web API, EF 5, Async, and more. Implementing this loosely coupled model using standard web protocols – like OAuth, JSON, and REST APIs – that enable developers to re-use skills and tools, and easily integrate SharePoint with Web and Mobile application architectures. A video of my talk + demos is now available to watch online: In the talk I walked through building an app from scratch – it showed off how easy it is to build solutions using new SharePoint application, and highlighted a web + workflow + mobile scenario that integrates SharePoint with code hosted on Windows Azure (all built using Visual Studio 2012 and ASP.NET 4.5 – including MVC and Web API). The new SharePoint Cloud App Model is something that I think is pretty exciting, and it is going to make it a lot easier to build SharePoint apps using the full power of both Windows Azure and the .NET Framework.  Using Windows Azure to easily extend SaaS based solutions like Office 365 is also a really natural fit and one that is going to offer a bunch of great developer opportunities.  Hope this helps, Scott  P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • 1.5 million Windows 7 phone’s sold…

    - by Boonei
    Microsoft announced that it has sold over 1.5 million Windows Phone 7 devices. Windows Phone 7 is a new generation of OS, and mobile operators, users, and device programmers all need to adopt it. It's not going to be an easy transition, because it's not an advanced/next version of Windows Mobile 6.x. We have heard that development on Microsoft's side for Win 6.x devices will not continue much longer; we don't know how long it will get support! Everything in it is quite new, like the OS, the user interface, and Xbox sync, and it also requires phone makers to run the OS on high-end chips, meaning at least 1 GHz. So the user segment occupied by phones like the HTC Wildfire is not the one being targeted. Hey! There is a catch with this magic number of 1.5 million: it counts only the number of units sold to mobile operators and retailers, not the number of units actually in consumers' hands and activated. The number could improve significantly in 2011, when Sprint and Verizon join the party in the United States. At least a dozen phone models running the Windows Phone 7 OS are lined up now in the rest of the world. One good thing customers can rejoice about is that Microsoft will push software updates directly to all its consumers; operators will not interfere. We can expect strong sales going forward on this important point alone, where Google's Android lacks the same. [Img Credit: Microsoft] This article, titled "1.5 million Windows 7 phones sold…", was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • #altnetseattle – CQRS

    - by GeekAgilistMercenary
    This is a topic I know nothing about, and thus these may be supremely disparate notes. Have fun translating. : ) . . . and coolness that the session is well past capacity. It separates things from the UI, and everything that needs populating is done through commands. The domain and reports have separate storage. Events populate these stores of data, such as a "sold event". What it looks like is that the domain controls the requests by event, which would be a product order or something similar. Event sourcing is a key element of the logic. DDD (Domain Driven Design) is part of the core basis for this methodology/structure. The architecture/methodology/structure is perfect for blade-style plug-in hardware as needed. Good blog entries: DDDD: Why I love CQRS; another, Command and Query Responsibility Segregation (CQRS); more, CQRS à la Greg Young; a bit by Udi Dahan; and there are more. Google, Bing, etc. are there for a reason. It appears the core underpinning architectural element of this is the breakout of uniquely identifiable actions, or I suppose better described as events. Those events then act upon specific pipelines such as read requests, write requests, etc. I will be doing more research on this topic and will have something written up shortly. At this time it seems like nothing new, just a large architectural breakout of the identifiable needs of the entire enterprise system. The reporting is in one segment of the architecture, the domain is in another, hydration is broken out to interfaces, and events are executed to incur events on the reports, or what appears by the description to be events on the domain. Anyway, more to come on this later.
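    Since the notes above stay at the conceptual level, here is a deliberately tiny Java sketch of the split being described: commands go through the domain and append events (such as a "sold event"), while a separate read model is populated from those same events and queried by the UI. All class names are illustrative, not taken from any particular framework.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // A domain event, e.g. the "sold event" mentioned above.
        record ItemSold(String itemId, int quantity) {}

        // Write side: handles commands and appends events to the event store.
        class SalesCommandHandler {
            private final List<ItemSold> eventStore = new ArrayList<>();
            private final SalesReportProjection projection;

            SalesCommandHandler(SalesReportProjection projection) {
                this.projection = projection;
            }

            // The "sell item" command: validate, record the event, publish it.
            void sellItem(String itemId, int quantity) {
                if (quantity <= 0) throw new IllegalArgumentException("quantity must be positive");
                ItemSold event = new ItemSold(itemId, quantity);
                eventStore.add(event);   // event sourcing: the event itself is the record
                projection.apply(event); // the read model is updated from the event
            }
        }

        // Read side: a denormalized report store, queried directly by the UI.
        class SalesReportProjection {
            private final Map<String, Integer> unitsSoldByItem = new HashMap<>();

            void apply(ItemSold event) {
                unitsSoldByItem.merge(event.itemId(), event.quantity(), Integer::sum);
            }

            int unitsSold(String itemId) {
                return unitsSoldByItem.getOrDefault(itemId, 0);
            }
        }

        public class CqrsSketch {
            public static void main(String[] args) {
                SalesReportProjection reports = new SalesReportProjection();
                SalesCommandHandler commands = new SalesCommandHandler(reports);
                commands.sellItem("widget-42", 3);
                commands.sellItem("widget-42", 2);
                System.out.println(reports.unitsSold("widget-42")); // prints 5
            }
        }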

    Read the article

  • Libgdx - 2D Mesh rendering overlap glitch

    - by user46858
    I am trying to render a 2D circle segment mesh (quarter circle) using Libgdx/OpenGL ES 2.0, but I seem to be getting an overlapping issue, as seen in the attached picture. I can't find the cause of the problem, but the overlapping disappears/reappears if I drag and resize the window to random sizes. The problem occurs on both PC and Android. The strange thing is that the first two segments at least don't seem to cause any overlapping, only the third and/or fourth segment, even though they are all rendered using the same mesh object. I have spent ages trying to find the cause before posting here, so ANY help/advice in finding the cause of this problem would be really appreciated.

        public class MyGdxGame extends Game {
            private SpriteBatch batch;
            private Texture texture;
            private OrthographicCamera myCamera;
            private float w;
            private float h;
            private ShaderProgram circleSegShader;
            private Mesh circleScaleSegMesh;
            private Stage stage;
            private float TotalSegments;
            Vector3 virtualres;

            @Override
            public void create() {
                w = Gdx.graphics.getWidth();
                h = Gdx.graphics.getHeight();
                batch = new SpriteBatch();
                ViewPortsize = new Vector2();
                TotalSegments = 4.0f;
                virtualres = new Vector3(1280.0f, 720.0f, 0.0f);
                myCamera = new OrthographicCamera();
                myCamera.setToOrtho(false, w, h);
                texture = new Texture(Gdx.files.internal("data/libgdx.png"));
                texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
                circleScaleSegMesh = createCircleMesh_V3(0.0f, 0.0f, 200.0f, 30.0f, 3, (360.0f / TotalSegments));
                circleSegShader = loadShaderFromFile(new String("circleseg.vert"), new String("circleseg.frag"));
                shaderProgram.pedantic = false;
                stage = new Stage();
                stage.setViewport(new ExtendViewport(w, h));
                Gdx.input.setInputProcessor(stage);
            }

            @Override
            public void render() {
                ....
                //render
                renderInit();
                renderCircleScaledSegment();
            }

            @Override
            public void resize(int width, int height) {
                stage.getViewport().update(width, height, true);
                myCamera.position.set(virtualres.x / 2.0f, virtualres.y / 2.0f, 0.0f);
                myCamera.update();
            }

            public void renderInit() {
                Gdx.gl20.glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
                Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
                batch.setShader(null);
                batch.setProjectionMatrix(myCamera.combined);
            }

            public void renderCircleScaledSegment() {
                Gdx.gl20.glEnable(GL20.GL_DEPTH_TEST);
                Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
                Gdx.gl20.glEnable(GL20.GL_BLEND);
                batch.begin();
                circleSegShader.begin();
                Matrix4 modelMatrix = new Matrix4();
                Matrix4 cameraMatrix = new Matrix4();
                Matrix4 cameraMatrix2 = new Matrix4();
                Matrix4 cameraMatrix3 = new Matrix4();
                Matrix4 cameraMatrix4 = new Matrix4();

                cameraMatrix = myCamera.combined.cpy();
                modelMatrix.idt().rotate(new Vector3(0.0f, 0.0f, 1.0f), 0.0f - ((360.0f / TotalSegments) / 2.0f)).trn(virtualres.x / 2.0f, virtualres.y / 2.0f, 0.0f);
                cameraMatrix.mul(modelMatrix);

                cameraMatrix2 = myCamera.combined.cpy();
                modelMatrix.idt().rotate(new Vector3(0.0f, 0.0f, 1.0f), 0.0f - ((360.0f / TotalSegments) / 2.0f) + (360.0f / TotalSegments)).trn(virtualres.x / 2.0f, virtualres.y / 2.0f, 0.0f);
                cameraMatrix2.mul(modelMatrix);

                cameraMatrix3 = myCamera.combined.cpy();
                modelMatrix.idt().rotate(new Vector3(0.0f, 0.0f, 1.0f), 0.0f - ((360.0f / TotalSegments) / 2.0f) + (2 * (360.0f / TotalSegments))).trn(virtualres.x / 2.0f, virtualres.y / 2.0f, 0.0f);
                cameraMatrix3.mul(modelMatrix);

                cameraMatrix4 = myCamera.combined.cpy();
                modelMatrix.idt().rotate(new Vector3(0.0f, 0.0f, 1.0f), 0.0f - ((360.0f / TotalSegments) / 2.0f) + (3 * (360.0f / TotalSegments))).trn(virtualres.x / 2.0f, virtualres.y / 2.0f, 0.0f);
                cameraMatrix4.mul(modelMatrix);

                Vector3 box2dpos = new Vector3(0.0f, 0.0f, 0.0f);

                circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix);
                circleSegShader.setUniformf("u_box2dpos", box2dpos);
                circleSegShader.setUniformi("u_texture", 0);
                texture.bind();
                circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES);

                circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix2);
                circleSegShader.setUniformf("u_box2dpos", box2dpos);
                circleSegShader.setUniformi("u_texture", 0);
                texture.bind();
                circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES);

                circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix3);
                circleSegShader.setUniformf("u_box2dpos", box2dpos);
                circleSegShader.setUniformi("u_texture", 0);
                texture.bind();
                circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES);

                circleSegShader.setUniformMatrix("u_projTrans", cameraMatrix4);
                circleSegShader.setUniformf("u_box2dpos", box2dpos);
                circleSegShader.setUniformi("u_texture", 0);
                texture.bind();
                circleScaleSegMesh.render(circleSegShader, GL20.GL_TRIANGLES);

                circleSegShader.end();
                batch.flush();
                batch.end();
                Gdx.gl20.glDisable(GL20.GL_DEPTH_TEST);
                Gdx.gl20.glDisable(GL20.GL_BLEND);
            }

            public Mesh createCircleMesh_V3(float cx, float cy, float r_out, float r_in, int num_segments, float segmentSizeDegrees) {
                float theta = (float) (2.0f * MathUtils.PI / (num_segments * (360.0f / segmentSizeDegrees)));
                float c = MathUtils.cos(theta); //precalculate the sine and cosine
                float s = MathUtils.sin(theta);
                float t, t2;
                float x = r_out;  //we start at angle = 0
                float y = 0;
                float x2 = r_in;  //we start at angle = 0
                float y2 = 0;
                float[] meshCoords = new float[num_segments * 2 * 3 * 7];
                int arrayIndex = 0;
                //array for triangles without indices
                for (int ii = 0; ii < num_segments; ii++) {
                    meshCoords[arrayIndex] = x2 + cx;
                    meshCoords[arrayIndex + 1] = y2 + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;

                    meshCoords[arrayIndex] = x + cx;
                    meshCoords[arrayIndex + 1] = y + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;

                    t = x;
                    x = c * x - s * y;
                    y = s * t + c * y;

                    meshCoords[arrayIndex] = x + cx;
                    meshCoords[arrayIndex + 1] = y + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;

                    meshCoords[arrayIndex] = x2 + cx;
                    meshCoords[arrayIndex + 1] = y2 + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;

                    meshCoords[arrayIndex] = x + cx;
                    meshCoords[arrayIndex + 1] = y + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;

                    t2 = x2;
                    x2 = c * x2 - s * y2;
                    y2 = s * t2 + c * y2;

                    meshCoords[arrayIndex] = x2 + cx;
                    meshCoords[arrayIndex + 1] = y2 + cy;
                    meshCoords[arrayIndex + 2] = 0.0f;
                    meshCoords[arrayIndex + 3] = 63.0f / 255.0f;
                    meshCoords[arrayIndex + 4] = 139.0f / 255.0f;
                    meshCoords[arrayIndex + 5] = 217.0f / 255.0f;
                    meshCoords[arrayIndex + 6] = 0.7f;
                    arrayIndex = arrayIndex + 7;
                }

                Mesh myMesh = new Mesh(VertexDataType.VertexArray, false, meshCoords.length, 0,
                    new VertexAttribute(VertexAttributes.Usage.Position, 3, "a_position"),
                    new VertexAttribute(VertexAttributes.Usage.Color, 4, "a_color"));
                myMesh.setVertices(meshCoords);
                return myMesh;
            }
        }

    Read the article

  • TRADACOMS Support in B2B

    - by Dheeraj Kumar
    TRADACOMS is an early EDI standard, used predominantly in the retail sector of the United Kingdom. It is similar to the EDIFACT messaging system, involving an ecs file for the translation and validation of messages. The slight differences between EDIFACT and TRADACOMS are:
    1. TRADACOMS is a simpler version than EDIFACT.
    2. There is no functional acknowledgment in TRADACOMS.
    3. Since it is just a business message to be sent to the trading partner, the various reference numbers at the STX, BAT, and MHD levels need not be persisted in B2B, as no business logic is derived from them.
    Considering this, in AS11 B2B this can be handled out of the box using the positional flat file document plugin. The STX and BAT segments, which define the envelope details and are part of the transaction, have to be sent from the back-end application itself, as there are no document protocol parameters defined in B2B. These would include identifiers like SenderCode, SenderName, RecipientCode, RecipientName, and reference numbers. Additionally, batching in this case can be achieved by sending all the messages of a batch in a single XML from the back-end application, with the total number of messages in the batch carried in the EOB (batch trailer) segment. In the inbound scenario, we can identify the document based on the start position and end position of the incoming document. However, there is a plan to identify the incoming document based on the TRADACOMS standard instead of start/end position. Please email [email protected] if you need a working sample.

    Read the article

  • Fixing Chrome's AJAX Request Caching Bug

    - by Steve Wilkes
    I recently had to make a set of web pages restore their state when the user arrived on them after clicking the browser’s back button. The pages in question had various content loaded in response to user actions, which meant I had to manually get them back into a valid state after the page loaded. I got hold of the page’s data in a JavaScript ViewModel using a jQuery ajax call, then iterated over the properties, filling in the fields as I went. I built in the ability to describe dependencies between inputs to make sure fields were filled in in the correct order and at the correct time, and that all worked nicely. To make sure the browser didn’t cache the AJAX call results I used jQuery’s cache: false option, and ASP.NET MVC’s OutputCache attribute for good measure. That all worked perfectly… except in Chrome. Chrome insisted on retrieving the data from its cache. cache: false adds a random query string parameter to make the browser think it’s a unique request – it made no difference. I made the AJAX call a POST – it made no difference. Eventually what I had to do was add a random token to the URL (not the query string) and use MVC routing to deliver the request to the correct action. The project had a single Controller for all AJAX requests, so this route: routes.MapRoute( name: "NonCachedAjaxActions", url: "AjaxCalls/{cacheDisablingToken}/{action}", defaults: new { controller = "AjaxCalls" }, constraints: new { cacheDisablingToken = "[0-9]+" }); …and this amendment to the ajax call: function loadPageData(url) { // Insert a timestamp before the URL's action segment: var indexOfFinalUrlSeparator = url.lastIndexOf("/"); var uniqueUrl = url.substring(0, indexOfFinalUrlSeparator) + "/" + new Date().getTime() + url.substring(indexOfFinalUrlSeparator); // Call the now-unique action URL: $.ajax(uniqueUrl, { cache: false, success: completePageDataLoad }); } …did the trick.
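
    If every call to that single AJAX controller should bypass the cache, the same token insertion can be applied in one place with a jQuery prefilter — a sketch, not from the original post, assuming all of the AJAX URLs start with "/AjaxCalls/":

    // Global variant of the token-insertion idea above (illustrative, not the author's code):
    // rewrite matching URLs so a numeric token becomes its own segment before the action,
    // which the "NonCachedAjaxActions" route then absorbs via {cacheDisablingToken}.
    $.ajaxPrefilter(function (options) {
        if (options.url.indexOf("/AjaxCalls/") === 0) {
            var lastSeparator = options.url.lastIndexOf("/");
            options.url = options.url.substring(0, lastSeparator) +
                          "/" + new Date().getTime() +
                          options.url.substring(lastSeparator);
        }
    });

    The numeric-only constraint on cacheDisablingToken keeps ordinary URLs from being captured by the cache-busting route.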

    Read the article

  • wynapse.com down last night, SC tonight

    - by Dave Campbell
    In an industry segment where nobody is ever 'asleep', I suppose no time is a good time to take SQL Server down for upgrades, and I had forgotten that my host was going to be doing that. Last night about 9pm (Arizona), in the middle of working on a blog post, things started going wonky and I finally realized everything was ok except for SQL Server. I turned in a ticket on it and was reminded about the maintenance schedule... guess I file those away in memory and just assume they'll happen while I'm asleep :) So, looking at the schedule, it appears that SQL Server for SilverlightCream is going to be down tonight. Minimum is 9-12pm Arizona time... mileage and time may vary. Since all the posts are run through SC to get the Skim count, having SQL down sucks, but I'd rather we got maintenance than have to react to a crash because of something that wasn't maintained. I'll try to get the next 'Cream post out early so that the bulk of folks can dig through it prior to the outage. Meanwhile, for those of you in Phoenix, tonight is our Phoenix Silverlight User Group April meeting, and Joel Neubeck is going to be giving us the run-through on Windows Phone 7! We're not as advanced as those MVP rock-stars in California like Victor Gaudioso who streams his user group meeting, so you'll just have to show up for the goodness! And for anyone who's interested, here's some WP7 bling for your desktop... I want some of this real bad for my laptop! Get the full image in the post by Ozymandias:

    Read the article

  • Debugging Node.js applications for Windows Azure

    - by cibrax
    If you are developing a new web application with Node.js for Windows Azure, you might notice there is no easy way to debug the application unless you are developing in an integrated IDE like Cloud9. For those who develop applications locally using a text editor (or WebMatrix) and Windows Azure PowerShell for Node.js, it requires some steps that are not documented anywhere at the moment. I spent a few hours on this the other day and practically got nowhere until I received some help from Tomek and the rest of the team. The IISNode version that currently ships with the Windows Azure SDK for Node.js does not support debugging by default, so you need to install the full IISNode version available in the GitHub repository. Once you have installed the full version, you need to enable debugging for the web application by modifying the web.config file: <iisnode debuggingEnabled="true" loggingEnabled="true" devErrorsEnabled="true" /> The XML above needs to be inserted within the existing "<system.webServer/>" section. The last step is to open a WebKit browser (e.g. Chrome) and navigate to the URL where your application is hosted, appending the segment "/debug" to the end. The full URL to the Node.js application must be used, for example, http://localhost:81/myserver.js/debug. That should open a new instance of Node Inspector in the browser, so you can debug the application from there. Enjoy!!
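
    For reference, here is a minimal myserver.js sketch of the kind this setup would attach to; under IISNode the listening "port" is a named pipe supplied through the PORT environment variable, so the numeric fallback below only matters when running node directly:

    // Minimal Node.js entry point (illustrative). IISNode supplies the pipe/port via
    // process.env.PORT; with debugging enabled in web.config, browsing to
    // http://localhost:81/myserver.js/debug should attach Node Inspector to this process.
    var http = require('http');

    http.createServer(function (req, res) {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Hello from Node.js behind IISNode\n');
    }).listen(process.env.PORT || 1337);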

    Read the article

  • Podcast Show Notes: Conversations in the Cloud

    - by Bob Rhubart
    The centerpiece of every OTN Architect Day event is a panel discussion that gathers all of the session speakers together to respond to questions from the audience. I generally try to record these discussions, usually by sticking my iPad on top of one of the PA speakers, with mixed results. Fortunately, the A/V tech at the venue for the Los Angeles event, held on October 25, 2012, had the necessary gear to get a good-quality recording of the panel discussion. So starting this week the OTN ArchBeat Podcast will feature a short series of highlights from those discussions. Listen to Part 1: Dude, What's My Role? Members of the Architect Day panel respond to an audience question about what happens to traditional IT roles in a cloud environment. Listen to Part 2: Migrating Mission-Critical Applications to the Cloud (Nov 21) The panel offers advice and examples in response to an audience question about dealing with mission-critical applications. Listen to Part 3: All Clouds Are Not Equal (Nov 28) The panel responds to a challenging question about cloud strategy with a discussion of enterprise-grade cloud services. Listen to Part 4: Cloud Security and Auditing (Dec 5) The last segment in the series is a short discussion in response to an audience question about auditing and security in the cloud. The Panelists (Listed alphabetically) Ashok Aletty, Senior Director of Product Management, Oracle Cloud Application Foundation Dr. James Baty, Vice President, Oracle Global Enterprise Architecture Program Dave Chappelle, Enterprise Architect, Oracle Global Enterprise Architecture Program Jeff Davies, Senior Principal Product Manager, Oracle Corporation Anbu Krishnaswamy, Enterprise Architect, Oracle Global Enterprise Architecture Program Dhanraj Pondicherry, Sales Consulting Manager, Oracle Exadata Perren Walker, Senior Principal Product Manager, Oracle Enterprise Manager Coming Soon Upcoming programs will focus on DevOps and Continuous Integration, and on Oracle's Java Cloud and Developer Cloud services. Stay tuned: RSS

    Read the article
