Search Results

Search found 7618 results on 305 pages for 'named graph'.


  • Windows 7 search for files with special characters in name.

    - by Luke
    I have a rather large source code repository on my machine; it is not indexed by Windows Search. I am trying to find some oddly-named generated files of the form .#name.extension.version where name and extension are normal names and extensions and version is a numeric value (e.g. something like 1.186). On Windows XP I could find these files by searching for .#*; on Windows 7 that just returns every single file and directory. So my question is this: is it possible to find files named like that using the built-in Windows 7 search functionality? I did find this question which is very similar, but the answer doesn't work for me; it seems like any special character I put in the query is either ignored or treated as a wildcard, and as a result it matches every single file and directory. Is there perhaps some registry value I can set to make the search-by-filename feature work with special characters?
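
    A workaround while the built-in search refuses to cooperate is to scan the tree with a small script. Below is a minimal sketch in Python; the regular expression is an assumption based on the .#name.extension.version description above, so adjust it to the exact naming scheme, and the repository path is hypothetical.

        import os
        import re

        # Matches names like ".#foo.cpp.1.186": ".#", a base name, an extension,
        # then a dotted numeric version (an assumed pattern; tune as needed).
        PATTERN = re.compile(r"^\.#.+\..+\.\d+(\.\d+)*$")

        def find_generated_files(root):
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    if PATTERN.match(name):
                        yield os.path.join(dirpath, name)

        for path in find_generated_files(r"C:\src\repo"):  # hypothetical repo path
            print(path)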

    Read the article

  • Interpolation on Cubism graphs

    - by Abe Stanway
    Cubism was designed, in mbostock's own words, for maximum information density, which means it generally wants to display one datapoint per pixel. While this is useful in many cases, it doesn't help when your data itself is not that dense. In those cases you get ugly, staccato-style graphs (screenshot omitted). Is there a way to interpolate my data/graph within Cubism to show a nice, smoothed graph? EDIT: After adding keepLastValue to the metric I get a stepped chart, while the same data rendered in Graphite looks smooth (screenshots omitted). I would like to smooth the Cubism view to look more like Graphite (with the added awesomeness of the horizon overplotting).
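
    Cubism itself is JavaScript, so the following is only a language-neutral illustration of the interpolation step being asked about, sketched in Python with NumPy: densify the sparse series onto a one-sample-per-pixel grid before charting.

        import numpy as np

        # Sparse samples: one point per minute, while the chart wants one per pixel.
        t_sparse = np.array([0.0, 60.0, 120.0, 180.0])   # sample times (s)
        v_sparse = np.array([3.0, 7.0, 4.0, 9.0])        # sample values

        # One target time per pixel; np.interp fills the gaps linearly.
        t_dense = np.arange(0.0, 181.0, 1.0)
        v_dense = np.interp(t_dense, t_sparse, v_sparse)
        print(v_dense[:3])   # [3.         3.06666667 3.13333333]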

    Read the article

  • FFT in MATLAB: wrong 0Hz frequency

    - by roujhan
    Hello, I want to use fft in MATLAB to analyze some experimental data saved as an Excel file. My code:

        A = xlsread('Book.xls');
        G = A';
        x = G(2, :);
        N = length(x);
        F = [-N/2:N/2-1]/N;
        X = abs(fft(x-mean(x), N))
        X = fftshift(X);
        plot(F, X)

    But it plots a graph with a large, wrong 0 Hz component; my true frequency is about 395 Hz and it is not shown in the plotted graph. Please tell me what is wrong. Any help would be appreciated.
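
    For comparison, here is the same computation as a minimal sketch in Python/NumPy. One thing to note: the frequency axis F above is normalized (cycles per sample) and only reads in Hz once it is scaled by the sampling rate, which the MATLAB snippet never uses; the fs below is an assumed value for a synthetic signal.

        import numpy as np

        fs = 1000.0                              # assumed sampling rate in Hz
        t = np.arange(0.0, 1.0, 1.0 / fs)
        x = np.sin(2 * np.pi * 395.0 * t)        # synthetic 395 Hz signal

        X = np.abs(np.fft.fft(x - x.mean()))     # subtract the mean, as above
        F = np.fft.fftfreq(len(x), d=1.0 / fs)   # frequency axis in Hz

        peak = abs(F[np.argmax(X[: len(X) // 2])])
        print(peak)                              # ~395.0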

    Read the article

  • How to convert a URL to a browser-like URL? (...%20...)

    - by Kaoukkos
    I want to get data using cURL but I have a problem. When I set the URL like this:

        $url = "https://graph.facebook.com/fql?q=SELECT name FROM page"; // continues

    I do not get anything returned. When I copy the URL from the browser, which is

        $url = "https://graph.facebook.com/fql?q=SELECT%20name%20FROM%20page";

    I get the results through cURL. I tried htmlentities and htmlspecialchars without luck. What am I missing here?

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $content = curl_exec($ch);
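
    The missing piece is percent-encoding of the query string: htmlentities and htmlspecialchars do HTML escaping, not URL encoding (in PHP that job belongs to urlencode/rawurlencode). The same idea sketched in Python:

        from urllib.parse import quote

        query = "SELECT name FROM page"
        url = "https://graph.facebook.com/fql?q=" + quote(query)
        print(url)   # https://graph.facebook.com/fql?q=SELECT%20name%20FROM%20page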

    Read the article

  • domain/IN: has no NS records

    - by thejartender
    I have set up a home web server using Ubuntu 12.10 and I can safely say that it works with regard to router forwarding and ports being found. I know this because I switched my hosting provider's VPS SOA record to use my ISP IP as an 'A' value and had my website running from home. This verified that my server was configured correctly, so I started what I believe to be the final step in making my old desktop into a full DNS server, following a tutorial I found. My LAN consists of the following: my router, with a gateway of 10.0.0.zzz; my server, with an IP of 10.0.0.xxx; and a laptop, with an IP of 10.0.0.yyy.

    Step 1: I installed BIND via sudo apt-get install bind9.

    Step 2: I configured /etc/bind/named.conf.local with:

        zone "sognwebdesign.no" {
            type master;
            file "/etc/bind/zones/sognwebdesign.no.db";
        };

        zone "0.0.10.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/rev.0.0.10.in-addr.arpa";
        };

    Step 3: Updated /etc/bind/named.conf.options with two ISP DNS addresses.

    Step 4: Updated /etc/resolv.conf with:

        nameserver 10.0.0.xxx
        search lan
        search sognwebdesign.no

    Step 5: Created a /etc/bind/zones directory.

    Step 6: Created /etc/bind/zones/sognwebdesign.no.db with:

        $TTL 3D
        @ IN SOA ns.sognwebdesign.no. admin.sognwebdesign.no. (
            2007062001
            28800
            3600
            604800
            38400 );
        sognwebdesign.no. IN NS ns1.sognwebdesign.no.
        sognwebdesign.no. IN NS ns2.sognwebdesign.no.
        sognwebdesign.no. IN NS ns3.sognwebdesign.no.
        NS1 IN A 10.0.0.1
        NS2 IN A 10.0.0.2
        NS3 IN A 10.0.0.3
        www IN A 10.0.0.4
        yuccalaptop IN A 10.0.0.19
        gw IN A 10.0.0.138
        TXT "Network Gateway"

    Step 7: Created /etc/bind/zones/rev.0.0.10.in-addr.arpa with:

        $TTL 3D
        @ IN SOA ns.sognwebdesign.no. admin.sognwebdesign.no. (
            2007062001
            28800
            604800
            604800
            86400 );
        zzz IN PTR gw.sognwebdesign.no.
        1 IN PTR ns1.sognwebdesign.no.
        2 IN PTR ns2.sognwebdesign.no.
        3 IN PTR ns3.sognwebdesign.no.
        yyy IN PTR yuccalaptop.sognwebdesign.no.

    I then restart BIND, and dig -x sognwebdesign.no works. Lastly, I run named-checkzone on each of my zone files, but my reverse zone fails with:

        sognwedesign.no/IN: has no NS records

    Can anyone explain what I am doing wrong here, or assist me in getting this configured correctly?
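
    As a quick sanity check once the zones load, the NS records at each zone apex can be queried from Python with the third-party dnspython package - a hedged sketch, assuming dnspython is installed and the hypothetical server IP below is replaced with the real BIND host; a zone with no apex NS records is exactly what named-checkzone complains about.

        import dns.resolver   # third-party: pip install dnspython

        resolver = dns.resolver.Resolver()
        resolver.nameservers = ["10.0.0.10"]   # hypothetical; use the BIND host's IP

        # Both the forward and the reverse zone must have NS records at the apex.
        for zone in ["sognwebdesign.no", "0.0.10.in-addr.arpa"]:
            for rdata in resolver.resolve(zone, "NS"):
                print(zone, "NS", rdata.target)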

    Read the article

  • Ubuntu reset network configuration

    - by user1103294
    When I boot up my Ubuntu server, it can no longer connect to my wireless network. It says "waiting for network configuration" for 60 seconds, then boots, but there is no wireless. I suspect it is because of the following: I used to connect to a wireless network named 2WIRE555, password 123abc. Then I upgraded my connection, and my new wireless network was named 2WIRE444, password 111111. Being lazy, I simply renamed 2WIRE555 to 2WIRE444 in my configuration and changed the password accordingly. I was hoping this would work, but ever since then my network configuration has been messed up. So, back to the issue: how do I reset the network configuration on my Ubuntu 11.10 server?

    Read the article

  • How to get a vertex id from a vertex name in R and igraph?

    - by user1310873
    I have a graph with names from 1 to 10:

        library(igraph)
        library(Cairo)
        g <- graph(c(0,1,0,4,0,9,1,7,1,9,2,9,2,3,2,5,3,6,3,9,4,5,4,8,5,8,6,7,6,8,7,8), n=10, dir=FALSE)
        V(g)$name <- c(1:10)
        V(g)$label <- V(g)$name
        coords <- c(0,0,13.0000,0,5.9982,5.9991,7.9973,7.0009,-1.0008,11.9999,0.9993,11.0002,7.9989,13.0009,10.9989,14.0009,5.9989,14.0009,7.0000,4.0000)
        coords <- matrix(coords, 10, 2, byrow=T)
        plot(g, layout=coords)
        listMn <- neighborhood(g, 1, 0:9)

    I'd like to do this, but in the opposite direction:

        m1 <- V(g)[listMn[[7]]]$name

    The instruction above returns the names 7 4 8 9. How do I get the ids listMn[[7]] = 6 3 7 8 back from the names 7 4 8 9?
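
    In R, match() over V(g)$name is one way to invert the lookup. The generic idea - build the reverse mapping from names back to ids once, then look names up in it - sketched in Python:

        # Vertex ids are just positions; names are labels attached to them.
        names = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]   # V(g)$name

        # Build the inverse mapping once, then look ids up by name.
        id_by_name = {name: idx for idx, name in enumerate(names)}

        wanted = ["7", "4", "8", "9"]
        print([id_by_name[n] for n in wanted])   # [6, 3, 7, 8]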

    Read the article

  • Is there a transformation matrix that can scale the x and/or y axis logarithmically?

    - by Dave M
    I'm using the .NET WPF geometry classes to graph waveforms. I've been using the matrix transformations to convert from the screen coordinate space to my coordinate space for the waveform. Everything works great and it's really simple to keep track of my window, scaling, etc. I can even use the inverse transform to calculate the mouse position in terms of the coordinate space. I use the built-in Scaling and Translation classes and then a custom matrix to do the y-axis flipping (there's no prefab matrix for flipping). I want to be able to graph these waveforms on a log scale as well (either x axis, y axis, or both), but I'm not sure this is even possible to do with a matrix transformation. Does anyone know if it is possible, and if it is, what is the matrix?
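
    For what it's worth, a 2D transformation matrix can only express affine maps - x' = a*x + c*y + tx, y' = b*x + d*y + ty - and log(x) is not of that form, so no such matrix exists; the usual workaround is to take logs of the data points first and keep the matrix pipeline for everything else. A minimal sketch of that idea in Python (not the WPF API itself):

        import math

        def to_screen(x, y, scale_x, scale_y, offset_x, offset_y,
                      log_x=False, log_y=False):
            # Non-affine part: optionally move to log space first.
            if log_x:
                x = math.log10(x)
            if log_y:
                y = math.log10(y)
            # Affine part: scale, flip y, translate.
            return (x * scale_x + offset_x, -y * scale_y + offset_y)

        # x = 1000.0 lands 3 decades along the log x axis.
        print(to_screen(1000.0, 0.5, 50, 100, 10, 300, log_x=True))   # (160.0, 250.0)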

    Read the article

  • Purely functional equivalent of weakhashmap?

    - by Jon Harrop
    Weak hash tables, like Java's WeakHashMap, use weak references so that the garbage collector can collect unreachable keys, at which point the bindings for those keys are removed from the collection. Weak hash tables are typically used to implement indirections from one vertex or edge in a graph to another, because they allow the garbage collector to collect unreachable portions of the graph. Is there a purely functional equivalent of this data structure? If not, how might one be created? This seems like an interesting challenge. The internal implementation cannot be pure because it must collect (i.e. mutate) the data structure in order to remove unreachable parts, but I believe it could present a pure interface to the user, who could never observe the impurities because they only affect portions of the data structure that the user can, by definition, no longer reach.
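
    For reference, the imperative structure under discussion - with bindings that disappear once their key becomes unreachable - sketched with Python's weakref module (this is the mutable original, not the purely functional equivalent being asked for):

        import gc
        import weakref

        class Vertex:
            """A graph vertex; plain instances can be weakly referenced."""
            def __init__(self, label):
                self.label = label

        # Bindings vanish once the key vertex becomes unreachable.
        edges = weakref.WeakKeyDictionary()

        a, b = Vertex("a"), Vertex("b")
        edges[a] = b
        print(len(edges))   # 1

        del a               # the key is now unreachable...
        gc.collect()
        print(len(edges))   # ...so its binding is gone: 0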

    Read the article

  • How to retrieve connection details of CheckPoint SSL Network Extender?

    - by amoe
    My workplace uses a Java-based VPN tool named CheckPoint SSL Network Extender. I would like to configure the VPN connection myself using stock OS tools, because I find the applet rather unstable. How would I go about getting all of the connection details needed to connect to the VPN manually? My workplace only supports the official client. When I am connected with the Java applet, if I run ipconfig /all I can see that a hidden network connection is created, named "Check Point Virtual Network Adapter For SSL Network Extender - Packet Scheduler Miniport", and I can see the various IP and DNS details there as well. However, because I need to log in to the applet-based tool, I presume I need to export some kind of key in order to use OS tools to configure this. Is that even possible? Answers for any OS are great, although I am testing on Windows XP and also want to use Linux clients.

    Read the article

  • Origin of display connector numbers in XServer (e.g. HDMI1, HDMI2, DP1)

    - by Andreas N
    A custom mainboard has a DVI and a DisplayPort connector on the board. Currently, everything connected to DVI is named "HDMI2" in the X server; I can see that by calling the xrandr tool (in Ubuntu Trusty Tahr). A display connected to the DP connector is named "DP1", or "HDMI1" if I use a DP-to-DVI adapter. We are now testing a slightly upgraded board version, which has a newer CPU (Intel J1800, Bay Trail) among other things, and the positions of the DVI and DP connectors are switched. Also, everything on the DVI port is now called "HDMI1", and something connected to the DP port gets "DP2" or "HDMI2". Q: What causes these numbers to be produced in this manner, and where (probably in the kernel) does it happen? I suspect the cause is hardware-related, specifically which CPU pins the connector pins are routed and attached to. Q: Would it be possible to influence this numbering scheme in order to retain the previous numbering behaviour?

    Read the article

  • Deadlock Problem because of an Update Lock.

    - by Randy Minder
    We have a deadlock issue we're trying to track down. I have a deadlock graph (xdl) generated from Profiler. It shows the losing SQL statement as a simple SELECT statement, not an UPDATE, DELETE or INSERT statement. The graph shows the losing SELECT statement requesting a shared lock on a resource, but also owning an update lock on a resource. This is what is baffling me. Why would a SELECT statement that is not part of an INSERT, UPDATE or DELETE ever hold an update lock on a resource? I should add that the update lock it owns is on the table being selected against by the losing SELECT statement.

    Read the article

  • PHP/Oracle Connectivity randomly "drops out"

    - by user20555
    Hi! Here's the current situation: I have two web servers (for now named A and B) and two database servers (named C and D). The web servers are quite old and are running an early version of Apache 2 + PHP 4, while the DB servers are running Oracle 9i and 10g respectively. We're experiencing a strange problem connecting (via PHP code) to one of the databases, but only from web server B; web server A has no issues at all. Randomly, web server B will report a "Not connected to Oracle" error (3114). I can't see a real pattern to this, but refreshing a few times seems to fix the issue. Apparently there are no drop-outs on the network interface, which leads me to believe there's some misconfiguration between PHP/Apache and Oracle (which uses connection pooling). We're running SunOS 5.8. Any ideas?

    Read the article

  • I think I don't understand git branches

    - by Hans
    Salutations, everyone. I have been working on a bash script as a small summer project to learn more about UNIX scripting and about using git. This is the first time I have used branches in git; normally I just stick to master. I was viewing the git log with the graph (git log --graph) when I noticed that my 'develop' branch seemed to have merged with 'master'. Something like this:

        master  ----1--------3----4----5----6----HEAD
        develop      \---2---/

    but commits 3 onwards were done within the develop branch. Doing git checkout master and git checkout develop showed this to be true. What exactly is going on? Is this what is known as fast-forwarding? P.S.: Commits 1 and 2 are also a mystery to me, given that commit 2 is actually an amendment of commit 1 (or so I thought; I used this advice).

    Read the article

  • Split a string array into a number array in JavaScript

    - by Wesley Skeen
    I am reading a value from a dropdown list, depending on which option is selected, and I am using jqPlot to graph the values. jqPlot expects an array of values like [91, 6, 2, 57, 29, 40, 95], but when I read the value from the dropdown box it comes in as a whole string: "[91, 6, 2, 57, 29, 40, 95]". I tried splitting it, but I got ["91", "6", "2", "57", "29", "40", "95"], which won't display the graph correctly. Has anybody encountered something like this before, and what can I do to convert my values into a number array? Thanks for any help.
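
    Since the string is itself valid JSON, JSON.parse would hand back a number array directly in JavaScript, as would mapping Number over the split tokens. The underlying parse-and-convert idea, sketched in Python for illustration:

        raw = "[91, 6, 2, 57, 29, 40, 95]"

        # Strip the brackets, split on commas, convert each token to a number.
        values = [int(token) for token in raw.strip("[]").split(",")]
        print(values)   # [91, 6, 2, 57, 29, 40, 95]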

    Read the article

  • LINQ with Include and criteria

    - by JMarsch
    How would I translate this into LINQ? Say I have a parent table (say, Customers) and a child table (Addresses). I want to return all of the parents who have addresses in California, and just the California address (but I want to do it in LINQ and get an object graph of Entity objects). Here's the old-fashioned way:

        SELECT c.blah, a.blah
        FROM Customer c
        INNER JOIN Address a ON c.CustomerId = a.CustomerId
        WHERE a.State = 'CA'

    The problem I'm having with LINQ is that I need an object graph of concrete Entity types (and it can't be lazy-loaded). Here's what I've tried so far:

        // This one doesn't filter the addresses -- I get the right customers,
        // but I get all of their addresses, not just the CA address object.
        from c in Customer.Include(c => c.Addresses)
        where c.Addresses.Any(a => a.State == "CA")
        select c

        // This one seems to work, but the Addresses collection on Customers is always null.
        from c in Customer.Include(c => c.Addresses)
        from a in c.Addresses
        where a.State == "CA"
        select c;

    Any ideas?
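
    The shape being asked for - parents paired with only their matching children - is a filtered projection. A language-neutral sketch of that shape in Python (this illustrates the desired result, not the Entity Framework mechanics; the sample data is made up):

        customers = [
            {"name": "Acme", "addresses": [{"state": "CA"}, {"state": "NY"}]},
            {"name": "Globex", "addresses": [{"state": "WA"}]},
        ]

        # Keep each parent that has a CA child, and keep only its CA children.
        result = [
            {"name": c["name"],
             "addresses": [a for a in c["addresses"] if a["state"] == "CA"]}
            for c in customers
            if any(a["state"] == "CA" for a in c["addresses"])
        ]
        print(result)   # [{'name': 'Acme', 'addresses': [{'state': 'CA'}]}]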

    Read the article

  • How can I duplicate, or copy a Core Data Managed Object?

    - by 106480833665852483906
    I have a managed object ("A") that contains various attributes and types of relationships, and its relationships also have their own attributes and relationships. What I would like to do is "copy" or "duplicate" the entire object graph rooted at object "A", thus creating a new object "B" that is very similar to "A". To be more specific, none of the relationships contained by "B" (or its children) should point to objects related to "A". There should be an entirely new object graph with similar relationships intact, and all objects having the same attributes, but of course different ids. There is the obvious manual way to do this, but I was hoping to learn of a simpler means of doing so that is not totally apparent from the Core Data documentation. TIA!
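
    What is being described is a recursive deep copy of an object graph. As an illustration of the semantics only (Python's copy module, not a Core Data API): every node is re-created rather than shared, so mutating the clone leaves the original untouched.

        import copy

        # A tiny object graph: a root whose children are nested objects.
        root = {"name": "A", "children": [{"name": "A1"}, {"name": "A2"}]}

        clone = copy.deepcopy(root)            # re-creates every node in the graph

        clone["children"][0]["name"] = "B1"
        print(root["children"][0]["name"])     # still "A1": the graphs are independent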

    Read the article

  • Generate page dynamically & insert body into another page - is this realistic?

    - by DaveDev
    I have a requirement to create a page that contains a graph at the top and, for each item in the graph, a fact sheet below. I already produce the fact sheets as stand-alone pages. Rather than recreating the fact sheet for inclusion in the page I have to create, I'd like to reuse the work that already exists. Is it realistic to dynamically generate each fact sheet as needed, strip out the body, and insert that into the new page? If so, does anyone have any pointers or suggestions? Thanks

    Read the article

  • Haskell - generating all paths between nodes

    - by user1460863
    I need to build a function which returns all paths between certain nodes:

        connect :: Int -> Int -> [[(Int,Int)]]

    The Data.Graph library gives me the useful function buildG, which builds a graph for me. If I call

        let g = buildG (1,5) [(1,2),(2,3),(3,4),(4,5),(2,5)]

    I get an array where every node is mapped to its neighbours. An example:

        g!1 = [2]
        g!2 = [3,5]
        ..
        g!5 = []

    I was trying to do it using list comprehensions, but I am not very good at Haskell and I am getting a typing error which I can't repair:

        connect x y g
          | x == y = []
          | otherwise = [(x,z) | z <- (g!x), connect z y g]

    I don't need to worry about cycles at this moment. Here is what I want to get:

        connect 1 5 g = [[(1,2),(2,3),(3,4),(4,5)],[(1,2),(2,5)]]
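
    For what it's worth, the recursive shape being reached for - extend a path by one outgoing edge, then recurse from that neighbour and concatenate the results - sketched in Python over the same adjacency data (assumes an acyclic graph, as the question does):

        # Adjacency list for: buildG (1,5) [(1,2),(2,3),(3,4),(4,5),(2,5)]
        g = {1: [2], 2: [3, 5], 3: [4], 4: [5], 5: []}

        def connect(x, y):
            """Return every edge-path from x to y."""
            if x == y:
                return [[]]                      # one path: the empty one
            return [[(x, z)] + rest
                    for z in g[x]
                    for rest in connect(z, y)]

        print(connect(1, 5))
        # [[(1, 2), (2, 3), (3, 4), (4, 5)], [(1, 2), (2, 5)]]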

    Read the article

  • Inventory Management OOP design

    - by rgamber
    This was an OOP design and implementation interview question which I came across on glassdoor.com: design and implement an inventory management system to minimize the number of missed delivery dates while keeping costs to the company low. Of course there is no single right answer to this, but I am not sure I understand the question correctly and am wondering what a good answer would be. Is this as simple as creating an undirected graph with nodes as the delivery points and edges weighted by the cost of each delivery, and then using a single-source shortest-path algorithm (like Dijkstra's or Bellman-Ford) on the graph? Not sure if this type of question should be asked here, so let me know and I will delete it.
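
    For reference, the single-source shortest-path computation mentioned above, as a minimal Dijkstra sketch with a binary heap; the delivery-point graph at the bottom is hypothetical, and whether shortest paths alone answer the interview question is of course a separate design discussion.

        import heapq

        def dijkstra(graph, source):
            """graph: {node: [(neighbor, weight), ...]}; returns cheapest cost to each node."""
            dist = {source: 0}
            heap = [(0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                      # stale heap entry
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        # Hypothetical delivery points with edge weights as delivery costs.
        graph = {"depot": [("a", 4), ("b", 1)], "b": [("a", 2)], "a": []}
        print(dijkstra(graph, "depot"))   # {'depot': 0, 'b': 1, 'a': 3}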

    Read the article

  • Real Excel Templates I

    - by Tim Dexter
    As promised, I'm starting to document the new Excel templates that I teased you all with a few weeks back. Leslie is buried in 11g documentation and will not get to officially documenting the templates for a while. I'll do my best to be professional and not ramble on about this and that, although the weather here has finally turned and it's 'scorchio' here in Colorado today. Maybe our stand of Aspen will finally come into leaf ... but I digress.

    Preamble

    These templates are not actually that new; I helped in a small way to develop them a few years back with Excel 'meistress' Shirley, for a company that was trying to use the Report Manager (RR) Excel FSG outputs under EBS 12. The functionality they needed was just not there in the RR FSG templates; those templates are actually XSL, created by the RR Excel template builder and fed to BIP for processing. Think of Excel from our RTF templates and you'll be there, i.e. not really Excel but HTML masquerading as Excel. Although still under controlled release in EBS, they have now made their way to the standalone release and are willing to share their Excel goodness. You get everything you have with the Excel Analyzer templates, plus so much more. Therein lies a question: what will happen to the Analyzer templates? My understanding is that both will come together into a single Excel template format some time in the post-11g release world. The new XLSX format for Excel 2007/10 is also in the mix, so watch this space.

    What more do these templates offer? Well, you can structure data in the Excel output. Similar to RTF templates, you can create sheets of data that have master-detail relationships. Although the Analyzer templates can do this, you have to get into macros, whereas BIP will do it all for you. You can also use native XSL functions on your data to manipulate it prior to rendering. The most impressive feature, for me at least, is the sheet 'bursting': you can split your hierarchical data across multiple sheets and dynamically name those sheets. Finally, you of course still get all the native Excel functionality.

    Pre-reqs

    You must be on 10.1.3.4.1 plus the latest rollup patch, 9546699. You can patch up a BIP instance running with OBIEE, no problem. You need Excel 2000 or above to build the templates. And some patience - there is no Excel template builder for these new templates, so it's all going to have to be done by hand. It's not that tough, but it can get a little 'fiddly'. You cannot test the template from Excel; it has to be deployed and then run.

    Limitations

    The new templates are definitely superior to the Analyzer templates, but there are a few limitations. Re-grouping is not supported: you can only follow a data hierarchy, not bend it to your will, unless you want to get into macros. There is no support for BIP functions; the templates support native XSL functions only. And there is no template builder.

    Getting Started

    The templates make use of named cells and groups of cells to allow BIP to find the insertion points for data. They also use a hidden sheet to store calculation mappings from named cells to XML data elements. To start with, in the great BIP tradition, we need some sample XML data. Because I wanted to show the master-detail output, we need some hierarchical data. If you have not yet gotten into the data templates, now is a good time; I wrote a post a while back starting from the simple and moving to the more complex. They generate ideal data sets for these templates.
    I'm working with the following data set:

        <EMPLOYEES>
          <LIST_G_DEPT>
            <G_DEPT>
              <DEPARTMENT_ID>10</DEPARTMENT_ID>
              <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME>
              <LIST_G_EMP>
                <G_EMP>
                  <EMPLOYEE_ID>200</EMPLOYEE_ID>
                  <EMP_NAME>Jennifer Whalen</EMP_NAME>
                  <EMAIL>JWHALEN</EMAIL>
                  <PHONE_NUMBER>515.123.4444</PHONE_NUMBER>
                  <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE>
                  <SALARY>4400</SALARY>
                </G_EMP>
              </LIST_G_EMP>
              <TOTAL_EMPS>1</TOTAL_EMPS>
              <TOTAL_SALARY>4400</TOTAL_SALARY>
              <AVG_SALARY>4400</AVG_SALARY>
              <MAX_SALARY>4400</MAX_SALARY>
              <MIN_SALARY>4400</MIN_SALARY>
            </G_DEPT>
            ...
          </LIST_G_DEPT>
        </EMPLOYEES>

    Simple enough to follow, and bread-and-butter stuff for an RTF template.

    Building the Template

    For an Excel template we need to start by thinking about how we want to render the data, so come up with a sample output in Excel. It's all dummy data, nothing marked up yet, with one row of data for each level: I have the department name and then a repeating row for the employees. You can apply Excel formatting to the layout. The total is going to be derived from a data element; we'll get to Excel functions later.

    Marking Up Cells

    Next we need to start marking up the cells with custom names to map them to data elements. The cell names need to follow a specific format: for data grouping, XDO_GROUP_?group_name?, and for data elements, XDO_?element_name?. Notice the question-mark delimiter; the group_name and element_name are case sensitive. The next step is to find out how to name cells. The easiest method is to highlight the cell and then type in the name; you can also use the Name Manager dialog. I use Excel 2007, where it's available on the ribbon under the Formulas section. Go through the process of naming all the cells for the element values you have. Using my data set from above, you should end up with something like the screenshot (omitted) in your Name Manager dialog, and you can fix any mistakes you might have made through it.

    Creating Groups

    In the screenshot above you can see there are a couple of named group cells. To create these, it's a simple case of highlighting the cells that make up the group and then naming them. For the EMP group, highlight the employee row and then type in the name XDO_GROUP_?G_EMP?. Notice the 10,000 total is outside of the G_EMP group; it's actually named XDO_?TOTAL_SALARY?, a query-calculated value. For the department group, we need to include the department name cell and the sub EMP grouping, and name it XDO_GROUP_?G_DEPT?. Notice the 10,000 total is included in the G_DEPT group; this will ensure it repeats at the department level. Lastly, we do need to include a special sheet in the workbook. We will not have anything meaningful in there for now, but it needs to be present. Create a new sheet and name it XDO_METADATA; the name is important, as the BIP rendering engine will be looking for it. For our current example we do not need anything other than the required entries, but the sheet must be present. The only cell that matters is the 'Data Constraints:' cell; the rest is optional. To save curious users getting distracted, it's easy enough to hide the metadata sheet.

    Deploying & Running Templates

    We should now have a usable Excel template. Loading it into a report is easy enough using the browser UI, just like an RTF template: set the template type to Excel. You will now be able to run the report and hopefully get something like the screenshot (omitted). You will not get the red highlighting; that's just some conditional formatting I added to the template using Excel functionality.
    Your dates are probably going to look raw too. I got around this for now using an Excel function on the cell:

        =--REPLACE(SUBSTITUTE(E8,"T"," "),LEN(E8)-6,6,"")

    Google to the rescue on that one. Try some other stuff out. To avoid constantly loading the template through the UI: if you have BIP running locally, or you can access the reports repository, then once you have loaded the template the first time, just save the template directly into the report folder. I have put together a sample report using a sample data set, available here. Just drop the XML data file, EmpbyDeptExcelData.xml, into the 'demo files' folder and you should be good to go. That's the basics; next we'll start using some XSL functions in the template and move on to the 'bursting' across sheets.

    Read the article

  • Azure - Part 4 - Table Storage Service in Windows Azure

    - by Shaun
    In the Windows Azure platform there are 3 storage services we can use to save our data in the cloud: Table, Blob and Queue. Before the Chinese New Year, Microsoft announced that Azure SDK 1.1 had been released; it supports a new type of storage, Drive, which allows us to operate on NTFS files in the cloud. I will cover that in the coming few posts, but now I would like to talk a bit about Table Storage.

    Concept of the Table Storage Service

    The most common development scenario is to retrieve, create, update and remove data from a data store; normally we communicate with a database. When we attempt to move our application to the cloud, the most common requirement is therefore a storage service. Windows Azure provides a built-in service that allows us to store structured data, called the Windows Azure Table Storage Service. The data stored in the table service is a collection of entities, and entities are similar to rows or records in a traditional database. An entity has a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key and the timestamp as the identifier for solving concurrency problems. Unlike a table in a database, the table service does not enforce a schema on its tables, which means you can have 2 entities in the same table with different property sets. The partition key is used by the Azure OS for load balancing and for entity group transactions. As you know, in the cloud you never know which machine is hosting your application and your data; they can be moved based on transaction weight and the number of requests. If the Azure OS finds that there are many requests connecting to your Book entities with the partition key "Novel", it can move them to another, idle machine to increase performance. So when choosing the partition key for your entities, make sure it indicates the category or group information, so that the Azure OS can perform load balancing as you wish.

    Consuming the Table

    Although the table service looks like a database, you cannot access it the way you are used to, with ADO.NET or ODBC. The table service exposes itself via the ADO.NET Data Services protocol, which allows you to consume it RESTfully through HTTP requests. The Azure SDK provides a set of classes for us to connect to it; the 2 classes we are likely to need are TableServiceContext and TableServiceEntity. TableServiceContext inherits from DataServiceContext, which represents the runtime context of an ADO.NET data service. It provides 4 methods we mainly use: CreateQuery, which creates an IQueryable instance from a given entity type; AddObject, which adds the specified entity to the table service; UpdateObject, which updates an existing entity in the table service; and DeleteObject, which deletes an entity from the table service. Before you operate on the table service you need to provide valid account information - something like the connection string of a database, but with the account name and account key you received when you created the storage service on the Windows Azure Developer Portal. After getting the CloudStorageAccount you can create a CloudTableClient instance, which provides a set of methods for using the table service. A very useful one is CreateTableIfNotExist, which creates the table container for you if it does not yet exist.
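
    To make the addressing model concrete, here is a toy in-memory sketch in Python of how table-service entities are identified by the (PartitionKey, RowKey) pair - not the Azure API itself, just the concept described above, with made-up sample data:

        # Toy model: entities are uniquely identified by (PartitionKey, RowKey);
        # properties are free-form, so two entities may have different property sets.
        table = {}

        def upsert(partition_key, row_key, **properties):
            table[(partition_key, row_key)] = properties

        upsert("Novel", "war-and-peace", title="War and Peace", price=12)
        upsert("Novel", "anna-karenina", title="Anna Karenina", price=10)
        upsert("Cookbook", "joy-of-cooking", title="Joy of Cooking")

        # Entities sharing a partition can be handled (and load-balanced) as a group.
        novels = {k: v for k, v in table.items() if k[0] == "Novel"}
        print(len(novels))   # 2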
    And then you can operate on the entities in that table through the methods I mentioned above. Let me explain a bit more through an example; we always prefer code to sentences.

    Straightforward Access to the Table

    Here I would like to build a WCF service on the Windows Azure platform with, for now, just one requirement: it allows the client to create an account entity in the table service. The WCF service has a method named Register that accepts an instance of the account the client wants to create; after performing some validation it adds the entity to the table service. So the first thing I did was create a Cloud Application in Visual Studio 2010 RC (the Azure SDK 1.1 only supports VS2008 and VS2010 RC). Then I added a configuration item for the storage account through the Settings section under the cloud project (double-click the Services file under the Roles folder and navigate to the Settings section). This setting will be used when retrieving my storage account information; since I am just in the development phase for now, I selected "UseDevelopmentStorage=true". (Screenshots omitted.)

    Then I navigated to the WebRole.cs file under my WCF project. If you have read my previous posts you know that this file defines what happens when the application starts and terminates on the cloud. What I need to do is, when the application starts, set the configuration publisher to load my config file with the config name I specified. I removed the original service and contract created by the VS template and added my IAccountService contract and its implementation class, AccountService, with the service method Register taking the parameters email and password and returning a boolean value to indicate the result - very simple. At this moment, if I press F5, the application is established on my local development fabric and I can see my service running through the browser.

    Let's implement the service method Register, which adds a new entity to the table service. As I said before, the entities you want to store in the table service must have 3 properties: partition key, row key and timestamp. You could create a class with these 3 properties yourself, but the Azure SDK provides a base class for that, named TableServiceEntity, in the Microsoft.WindowsAzure.StorageClient namespace. So what we need to do is simpler: create a class named Account derived from TableServiceEntity and add my own properties: Email, Password, DateCreated and DateDeleted. DateDeleted is a nullable date-time value indicating whether this entity has been deleted, and when. Did you notice that I missed something here? Yes: the partition key and row key have not been assigned. The TableServiceEntity base class defines 2 constructors: a parameter-less one, used to fill values into the properties from the table service when retrieving data, and one with 2 parameters, partition key and row key. As I said above, the partition key may affect load balancing and the row key must be unique, so here I use the email as the partition key and the email plus a GUID as the row key.

    OK, now we have finished the entity class we need to store in the table service. The next step is to create a data access class to add it. The Azure SDK gives us a base class for this too, named TableServiceContext, as I mentioned above. So let's create a class to operate on the Account entities.
    The TableServiceContext needs the storage account information for its constructor: the combination of the storage service URI that we create on the Windows Azure platform and the relevant account name and key. The TableServiceContext uses this information to find the right address and verify the account when operating on the storage entities. Hence, in my AccountDataContext class, I override this constructor and pass the storage account into it. All entities are saved in table storage in one or more tables, which we call "table containers". Before we operate on an entity we need to make sure that its table container has been created in storage. There's a method we can use for that: CloudTableClient.CreateTableIfNotExist. So in the constructor I call it first, to make sure every other method is invoked after the table has been created. Notice that I pass the storage account endpoint URI and the credentials to specify where my storage is located and who I am. One more piece of advice: make your entity class name the same as the table name when creating the table; it improves performance when you operate on it over the cloud, especially when querying.

    Since the Register WCF method adds a new account to the table service, I created a corresponding method to add the account entity. Before implementing it, I added a reference to System.Data.Services.Client to the project. This reference provides some common methods of ADO.NET Data Services that can be used with the Windows Azure Table Service; I use its AddObject method to create my account entity. Since the table service does not fully implement ADO.NET Data Services, there are some methods in System.Data.Services.Client that TableServiceContext doesn't support, such as AddLinks. Then I implemented the service method to add the account entity through the AccountDataContext. You can see that in the service implementation I load the storage account information through my configuration file and create the account table entity from the parameters. Then I create the AccountDataContext; if this is the first time the method is invoked, the constructor of the AccountDataContext creates a table container for me. Then I use the Add method to add the account entity to the table.

    Next, let's create a fairly simple client application to test this service. I created a Windows console application and added a service reference to my WCF service. The metadata of a WCF service cannot be retrieved when it is deployed on Windows Azure, even though <serviceMetadata httpGetEnabled="true"/> has been set; if we need its metadata we can deploy it on the local development service and then change the endpoint to the address in the cloud. In the client-side app.config file I pointed the endpoint at the local development fabric address, and then implemented the client to let me input an email and a password and invoke the WCF service to add my account. Let's run the application and see the result: of course it returns TRUE, and in the local SQL Express I can see that the data has been saved in the table.

    Summary

    In this post I explained more about the Windows Azure Table Storage Service. I also created a small application to demonstrate how to connect to and consume it through the ADO.NET Data Services managed library provided with the Azure SDK. I only showed how to create an entity in the storage service.
    In the next post I would like to explain how to query the entities with conditions through LINQ. I also would like to refactor my AccountDataContext class to make it dynamic for any kind of entity.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Building an HTML5 App with ASP.NET

    - by Stephen Walther
    I'm teaching several JavaScript and ASP.NET workshops over the next couple of months (thanks everyone!) and I thought it would be useful for my students to have a really easy-to-use JavaScript reference. I wanted a simple interactive JavaScript reference and I could not find one, so I decided to put together one of my own. I decided to use the latest features of JavaScript, HTML5 and jQuery, such as local storage, offline manifests, and jQuery templates. What could be more appropriate than building a JavaScript reference with JavaScript? You can try out the application by visiting: http://Superexpert.com/JavaScriptReference. Because the app takes advantage of several advanced features of HTML5, it won't work with Internet Explorer 6 (but really, you should stop using that browser). I have tested it with IE 8, Chrome 8, Firefox 3.6, and Safari 5. You can download the source for the JavaScript Reference application at the end of this article.

    Superexpert JavaScript Reference

    Let me provide you with a brief walkthrough of the app (screenshots omitted). When you first open the application, you see a lookup screen. As you type the name of something from the JavaScript language, matching results are displayed. You can click the details link for any entry to view its details in a modal dialog. Alternatively, you can click on any of the tabs - Objects, Functions, Properties, Statements, Operators, Comments, or Directives - to filter results by type of syntax; for example, you might want to see a list of all JavaScript built-in objects. You can log in to the application to make modifications: after you log in, you can add, update, or delete entries in the reference database.

    HTML5 Local Storage

    The application takes advantage of HTML5 local storage to store all of the reference entries in the local browser. IE 8, Chrome 8, Firefox 3.6, and Safari 5 all support local storage. When you open the application for the first time, all of the reference entries are transferred to the browser. The data is stored persistently: even if you shut down your computer and return to the application many days later, the data does not need to be transferred again. Whenever you open the application, the app checks with the server to see if any of the entries have been updated on the server; if so, only the updates are transferred to the browser and merged with the existing entries in local storage. After the reference database has been transferred to your browser once, only changes are transferred in the future. You get two benefits from using local storage. First, the application loads and runs very fast after the data has been loaded once; the application does not query the server whenever you filter or view entries, since all of the data is persisted in the browser. Second, you can browse the JavaScript reference even when you are not connected to the Internet (when you are on the proverbial airplane). The JavaScript Reference works as an offline application for browsers that support offline applications (unfortunately, not IE). When using Google Chrome, you can easily view the contents of local storage by selecting Tools, Developer Tools (CTRL-SHIFT-I) and selecting Storage, Local Storage. The JavaScript Reference app stores two items in local storage: entriesLastUpdated and entries.
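
    The sync scheme described above - keep a lastUpdated marker, fetch only newer entries, merge by id - in a small language-neutral sketch (the real app does this in JavaScript against localStorage; the Python below and its sample data are only an illustration of the merge step):

        def merge_entries(local, changes):
            """Merge a server change set into the locally cached entries, keyed by id."""
            by_id = {e["id"]: e for e in local}
            for change in changes:
                by_id[change["id"]] = change      # add new entries, overwrite updated ones
            return list(by_id.values())

        local = [{"id": 1, "name": "Array"}, {"id": 2, "name": "eval"}]
        changes = [{"id": 2, "name": "eval (updated)"}, {"id": 3, "name": "let"}]
        print(merge_entries(local, changes))
        # [{'id': 1, ...}, {'id': 2, 'name': 'eval (updated)'}, {'id': 3, 'name': 'let'}]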
    HTML5 Offline App

    For browsers that support HTML5 offline applications - Chrome 8 and Firefox 3.6, but not Internet Explorer - you do not need to be connected to the Internet to use the JavaScript Reference. It can execute entirely on your machine, just like any other desktop application. When you first open the application with Firefox, you are presented with a notification bar that asks whether you want to accept offline content. If you click the Allow button, then all of the files (generated ASPX, images, CSS, JavaScript) needed by the JavaScript Reference will be stored on your local computer.

    Automatic Script Minification and Combination

    All of the custom JavaScript files are combined and minified automatically whenever the application is built with Visual Studio. The custom scripts are contained in a folder named App_Scripts. When you perform a build, the combine.js and combine.debug.js files are generated. The Combine.config file contains the list of files that should be combined (importantly, it specifies the order in which the files should be combined). Here's the contents of the Combine.config file:

        <?xml version="1.0"?>
        <combine>
          <scripts>
            <file path="compat.js" />
            <file path="storage.js" />
            <file path="serverData.js" />
            <file path="entriesHelper.js" />
            <file path="authentication.js" />
            <file path="default.js" />
          </scripts>
        </combine>

    jQuery and jQuery UI

    The JavaScript Reference application takes heavy advantage of jQuery and jQuery UI. In particular, the application uses jQuery templates to format and display the reference entries. Each of the separate templates is stored in a separate ASP.NET user control in a folder named Templates. The contents of the user controls (and therefore the templates) are combined in the default.aspx page:

        <!-- Templates -->
        <user:EntryTemplate runat="server" />
        <user:EntryDetailsTemplate runat="server" />
        <user:BrowsersTemplate runat="server" />
        <user:EditEntryTemplate runat="server" />
        <user:EntryDetailsCloudTemplate runat="server" />

    When the default.aspx page is requested, all of the templates are retrieved in a single page.

    WCF Data Services

    The JavaScript Reference application uses WCF Data Services to retrieve and modify database data. The application exposes a server-side WCF Data Service named EntryService.svc that supports querying, adding, updating, and deleting entries. jQuery Ajax calls are made against the WCF Data Service to perform the database operations from the browser; the OData protocol makes this easy. Authentication is handled on the server with a ChangeInterceptor: only authenticated users are allowed to update the JavaScript Reference entry database.

    JavaScript Unit Tests

    In order to build the JavaScript Reference application, I depended on JavaScript unit tests. I needed the unit tests, in particular, to write the JavaScript merge functions which merge entry change sets from the server with existing entries in browser local storage. For unit tests to be useful, they need to run fast; I ran mine after each build. For this reason, I did not want to run the unit tests within the context of a browser. Instead, I ran them using server-side JavaScript (the Microsoft Script Control). The source code that you can download at the end of this blog entry includes a project named JavaScriptReference.UnitTests that contains all of the JavaScript unit tests.
    JavaScript Integration Tests

    Because not every feature of an application can be tested by unit tests, the JavaScript Reference application also includes integration tests. I wrote the integration tests using Selenium RC in combination with ASP.NET unit tests. The Selenium tests run against all of the target browsers for the JavaScript Reference application: IE 8, Chrome 8, Firefox 3.6, and Safari 5. For example, here is the Selenium test that checks whether authenticating with a valid user name and password correctly switches the application to Admin Mode:

        [TestMethod]
        [HostType("ASP.NET")]
        [UrlToTest("http://localhost:26303/JavaScriptReference")]
        [AspNetDevelopmentServerHost(@"C:\Users\Stephen\Documents\Repos\JavaScriptReference\JavaScriptReference\JavaScriptReference", "/JavaScriptReference")]
        public void TestValidLogin() {
            // Run test for each controller
            foreach (var controller in this.Controllers) {
                var selenium = controller.Value;
                var browserName = controller.Key;

                // Open reference page.
                selenium.Open("http://localhost:26303/JavaScriptReference/default.aspx");

                // Click login button displays login form
                selenium.Click("btnLogin");
                Assert.IsTrue(selenium.IsVisible("loginForm"), "Login form appears after clicking btnLogin");

                // Enter user name and password
                selenium.Type("userName", "Admin");
                selenium.Type("password", "secret");
                selenium.Click("btnDoLogin");

                // Should set adminMode == true
                selenium.WaitForCondition("selenium.browserbot.getCurrentWindow().adminMode==true", "30000");
            }
        }

    The results of running the Selenium tests appear in the Test Results window just like the unit tests. The Selenium tests take much longer to execute than the unit tests, but they provide test coverage for actual browsers. Furthermore, if you are using Visual Studio ALM, you can run the tests automatically every night as part of your standard nightly build. You can view the Selenium tests by opening the JavaScriptReference.QATests project.

    Summary

    I plan to write more detailed blog entries about this application over the next week. I want to discuss each of the features - HTML5 local storage, HTML5 offline apps, jQuery templates, automatic script combining and minification, JavaScript unit tests, Selenium tests - in more detail. You can download the source code for the JavaScript Reference application by clicking the following link: Download. You need Visual Studio 2010 and ASP.NET 4 to build the application. Before running the JavaScript unit tests, install the Microsoft Script Control. Before running the Selenium tests, start the Selenium server by running the StartSeleniumServer.bat file located in the JavaScriptReference.QATests project.

    Read the article

  • Kill your temp tables using keyboard shortcuts : SSMS

    - by jamiet
    Here's a nifty little SSMS trick that my colleague Tom Hunter educated me on the other day, and I thought it was worth sharing. If you're a keyboard shortcut junkie then you'll love it. How often, when working with code in SSMS that contains temp tables, do you see the following message?

        Msg 2714, Level 16, State 6, Line 78
        There is already an object named '#table' in the database.

    Quite often, I would imagine; it happens to me all the time! Usually I write a bit of code at the top of the query window that drops the table if it exists, but there's a much easier way of dealing with it. Remember that temp tables disappear as soon as your session ends, hence wouldn't it be nice if there were a quick way of recycling (i.e. stopping and restarting) your session? Well, it turns out there is, and all it takes is a sequence of 4 keystrokes:

        1. Bring up the context menu using that mythically-named button that usually sits 3 to the right of the space bar
        2. 'C' for "Connection"
        3. 'H' for "Change Connection..."
        4. 'Enter' to select the same connection you had open last time (screenshots omitted)

    Once you've done it a few times you'll probably have the whole sequence down to less than a second. Such a simple little trick; I'm annoyed with myself for it not occurring to me before! The only caveat is that you'll need a "USE <database>" directive at the top of your query window, but I don't think that's much of a bind! That is all, other than to say that if you like little SSMS titbits like this then Lee Everest's blog is a good one to keep an eye on! @jamiet

    Read the article

  • Front-end structure of large scale Django project

    - by Saike
    A few days ago I started to work at a new company. Before me, all of the front-end and backend code was written by one man (oh my...). As you know, a Django app contains two main directories for the front-end: /static, for static (public) files, and /templates, for Django templates. We now have a large application with more than 10 different modules, like home, admin, spanel, mobile, etc. This is the current structure of files and directories. First, the /static directory (screenshot omitted): as you can see, it mixes directories named like modules with directories containing global libs. Second, the /templates directory (screenshot omitted): some directories are named like modules but contain mixed templates, some depend on a new version =), and some are used only in one module but placed globally. I think this is an ugly, unmaintainable, stress-inducing structure! After spending some time on it, I suggest a scheme based on a module structure. First, we have version directories, used to save full project backups, including a /DEPRECATED directory for old, unused files and a /CURRENT (active) directory that contains the production version of the project. I think this is right, because we can access older or newer version files quickly and easily, and we are protected from broken or wrong dependencies between different versions. Second, every version has standalone modules plus a global module, and every module contains its own /static and /templates directories. This structure avoids broken or wrong dependencies between different modules, because every module has its own JS app, CSS files and local images. The global module contains all libraries, main stylesheets and images like logos or the favicon. I think this structure is much easier to maintain, update, refactor, etc. My question is: do you think this scheme is better than the current one? Can this scheme work, or is it not possible to implement it in a Django app?

    Read the article
