Search Results

Search found 10006 results on 401 pages for 'symbol tables'.


  • Exploring the Excel Services REST API

    - by jamiet
    Over the last few years Analysis Services guru Chris Webb and I have been on something of a crusade to enable better access to data that is locked up in countless Excel workbooks that litter the hard drives of enterprise PCs. The most prominent manifestation of that crusade up to now has been a forum thread that Chris began on Microsoft Answers entitled Excel Web App API? Chris began that thread with: I was wondering whether there was an API for the Excel Web App? Specifically, I was wondering if it was possible (or if it will be possible in the future) to expose data in a spreadsheet in the Excel Web App as an OData feed, in the way that it is possible with Excel Services? Up to recently the last 10 words of that paragraph "in the way that it is possible with Excel Services" had completely washed over me however a comment on my recent blog post Thoughts on ExcelMashup.com (and a rant) by Josh Booker in which Josh said: Excel Services is a service application built for sharepoint 2010 which exposes a REST API for excel documents. We're looking forward to pros like you giving it a try now that Office365 makes sharepoint more easily accessible.  Can't wait for your future blog about using REST API to load data from Excel on Offce 365 in SSIS. made me think that perhaps the Excel Services REST API is something I should be looking into and indeed that is what I have been doing over the past few days. And you know what? I'm rather impressed with some of what Excel Services' REST API has to offer. Unfortunately Excel Services' REST API also has one debilitating aspect that renders this blog post much less useful than it otherwise would be; namely that it is not publicly available from the Excel Web App on SkyDrive. Therefore all I can do in this blog post is show you screenshots of what the REST API provides in Sharepoint rather than linking you directly to those REST resources; that's a great shame because one of the benefits of a REST API is that it is easily and ubiquitously demonstrable from a web browser. Instead I am hosting a workbook on Sharepoint in Office 365 because that does include Excel Services' REST API but, again, all I can do is show you screenshots. N.B. If anyone out there knows how to make Office-365-hosted spreadsheets publicly-accessible (i.e. without requiring a username/password) please do let me know (because knowing which forum on which to ask the question is an exercise in futility). In order to demonstrate Excel Services' REST API I needed some decent data and for that I used the World Tourism Organization Statistics Database and Yearbook - United Nations World Tourism Organization dataset hosted on Azure Datamarket (its free, by the way); this dataset "provides comprehensive information on international tourism worldwide and offers a selection of the latest available statistics on international tourist arrivals, tourism receipts and expenditure" and you can explore the data for yourself here. If you want to play along at home by viewing the data as it exists in Excel then it can be viewed here. Let's dive in.   The root of Excel Services' REST API is the model resource which resides at: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model Note that this is true for every workbook hosted in a Sharepoint document library - each Excel workbook is a RESTful resource. (Update: Mark Stacey on Twitter tells me that "It's turned off by default in onpremise Sharepoint (1 tickbox to turn on though)". Thanks Mark!) 
The data is provided as an ATOM feed but I have Firefox's feed reading ability turned on so you don't see the underlying XML goo. As you can see there are four top level resources, Ranges, Charts, Tables and PivotTables; exploring one of those resources is where things get interesting. Let's take a look at the Tables Resource: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables Our workbook contains only one table, called ‘Table1’ (to reiterate, you can explore this table yourself here). Viewing that table via the REST API is pretty easy, we simply append the name of the table onto our previous URI: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1') As you can see, that quite simply gives us a representation of the data in that table. What you cannot see from this screenshot is that this is pure HTML that is being served up; that is all well and good but actually we can do more interesting things. If we specify that the data should be returned not as HTML but as: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')?$format=image then that data comes back as a pure image and can be used in any web page where you would ordinarily use images. This is the thing that I really like about Excel Services’ REST API – we can embed an image in any web page but instead of being a copy of the data, that image is actually live – if the underlying data in the workbook were to change then hitting refresh will show a new image. Pretty cool, no? The same is true of any Charts or Pivot Tables in your workbook - those can be embedded as images too and if the underlying data changes, boom, the image in your web page changes too. There is a lot of data in the workbook so the image returned by that previous URI is too large to show here so instead let’s take a look at a different resource, this time a range: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15') That URI returns cells A1 to C15 from a worksheet called “Data”: And if we ask for that as an image again: http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')?$format=image Were this image resource not behind a username/password then this would be a live image of the data in the workbook as opposed to one that I had to copy and upload elsewhere. Nonetheless I hope this little wrinkle doesn't detract from the inate value of what I am trying to articulate here; that an existing image in a web page can be changed on-the-fly simply by inserting some data into an Excel workbook. I for one think that that is very cool indeed! I think that's enough in the way of demo for now as this shows what is possible using Excel Services' REST API. Of course, not all features work quite how I would like and here is a bulleted list of some of my more negative feedback: The URIs are pig-ugly. Are "_vti_bin" & "ExcelRest.aspx" really necessary as part of the URI? Would this not be better: http://server/Documents/TourismExpenditureInMillionsOfUSD.xlsx/Model/Tables(‘Table1’) That URI provides the necessary addressability and is a lot easier to remember. Discoverability of these resources is not easy, we essentially have to handcrank a URI ourselves. 
Take the example of embedding a chart into a blog post - would it not be better if I could browse first through the document library to an Excel workbook and THEN through the workbook to the chart/range/table that I am interested in? Call it a wizard if you like. That would be really cool and would, I am sure, promote this feature and cut down on the copy-and-paste disease that the REST API is meant to alleviate. The resources that I demonstrated can be returned as feeds as well as images or HTML simply by changing the format parameter to ?$format=atom; however, for some inexplicable reason they don't return OData and no one on the Excel Services team can tell me why (believe me, I have asked). $format is an OData parameter; however, other useful parameters such as $top and $filter are not supported. It would be nice if they were. Although I haven't demonstrated it here, Excel Services' REST API does provide a makeshift way of altering the data by changing the value of specific cells; however, what it does not allow you to do is add new data into the workbook. Google Docs allows this and was one of the motivating factors for Chris Webb's forum post that I linked to above. None of this works for Excel workbooks hosted on SkyDrive. This blog post is as long as it needs to be for a short introduction so I'll stop now. If you want to know more then I recommend checking out a few links: Excel Services REST API documentation on MSDN; So what does REST on Excel Services look like??? by Shahar Prish; Excel Services in SharePoint 2010 REST API Syntax by Christian Stich. Any thoughts? Let's hear them in the comments section below! @Jamiet 

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, that reaches back to the 1970s. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. 
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offers specific advice, and not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that let's you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows: % ld -Dargs -z guidance=foo,nodefs debug: debug: Solaris Linkers: 5.11-1.2275 debug: debug: arg[1] option=-D: option-argument: args debug: arg[2] option=-z: option-argument: guidance=foo,nodefs debug: warning: unrecognized -z guidance item: foo The -z fatal-warning option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms: -z fatal-warnings | nofatal-warnings --fatal-warnings | --no-fatal-warnings The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows: -z guidance[=item1,item2,...] Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows. Specify Required Dependencies Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs. Do Not Specify Non-Required Dependencies Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused. Lazy Loading Dependencies should be identified for lazy loading. 
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload. Direct Bindings Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect. Pure Text Segment Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext. Mapfile Syntax All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile. Library Search Path Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects. Example The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: Does not specify all it's dependencies Specifies dependencies it does not use Does not use direct bindings Uses a version 1 mapfile Contains relocations to the readonly allocable text (not PIC) This scenario is sadly very common — many shared objects have one or more of these issues. % cat hello.c #include <stdio.h> #include <unistd.h> void hello(void) { printf("hello user %d\n", getpid()); } % cat mapfile.v1 # This version 1 mapfile will trigger a guidance message % cc hello.c -o hello.so -G -M mapfile.v1 -lelf As you can see, the operation completes without error, resulting in a usable object. 
However, turning on guidance reveals a number of things that could be better: % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance ld: guidance: version 2 mapfile syntax recommended: mapfile.v1 ld: guidance: -z lazyload option recommended before first dependency ld: guidance: -B direct or -z direct option recommended before first dependency Undefined first referenced symbol in file getpid hello.o (symbol belongs to implicit dependency /lib/libc.so.1) printf hello.o (symbol belongs to implicit dependency /lib/libc.so.1) ld: warning: symbol referencing errors ld: guidance: -z defs option recommended for shared objects ld: guidance: removal of unused dependency recommended: libelf.so.1 warning: Text relocation remains referenced against symbol offset in file .rodata1 (section) 0xa hello.o getpid 0x4 hello.o printf 0xf hello.o ld: guidance: position independent (PIC) code recommended for shared objects ld: guidance: see ld(1) -z guidance for more information Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things: % cat mapfile.v2 # This version 2 mapfile will not trigger a guidance message $mapfile_version 2 % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance ld: guidance: -B direct or -z direct option recommended before first dependency ld: guidance: see ld(1) -z guidance for more information It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect Conclusions The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will replay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Database Table Prefixes

    - by DoctorMick
    We're having a few discussions at work around the naming of our database tables. We're working on a large application with approx 100 database tables (ok, so it isn't that large), most of which can be categorized into different functional areas, and we're trying to work out the best way of naming/organizing these within an Oracle database. The three current options are: Create the different functional areas in separate schemas. Create everything in the same schema but prefix the tables with the functional area. Create everything in the same schema with no prefixes. We have various pros and cons around each one but I'd be interested to hear everyone's opinions on what the best solution is.
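    For illustration only, here is a minimal sketch of what those options might look like in Oracle-style DDL; the BILLING area, APP_OWNER schema and INVOICE table are hypothetical names, not taken from the actual model under discussion:

        -- Option 1: one schema per functional area
        CREATE TABLE billing.invoice (
            invoice_id NUMBER PRIMARY KEY,
            created_on DATE NOT NULL
        );

        -- Option 2: a single application schema, functional-area prefix on the table name
        CREATE TABLE app_owner.billing_invoice (
            invoice_id NUMBER PRIMARY KEY,
            created_on DATE NOT NULL
        );

        -- Option 3: a single application schema, no prefix
        CREATE TABLE app_owner.invoice (
            invoice_id NUMBER PRIMARY KEY,
            created_on DATE NOT NULL
        );

    One trade-off worth keeping in mind: with option 1, queries that span functional areas need schema qualification (plus the grants or synonyms to go with it), whereas options 2 and 3 keep everything reachable in one schema.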

    Read the article

  • SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

    - by Pinal Dave
    At conferences and at speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked. The question is around, "Why is SQL Server using up all the memory and not releasing it even when idle?" Well, the answer can be long and, with the release of SQL Server 2014, this got even more complicated. This release of SQL Server 2014 introduces In-Memory OLTP, which is a completely new concept, and our dependency on memory has increased manifold. In reality, nothing much changes, but we now have additional memory-optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to get the hang of all the innovations and the new features introduced in the next version. So today's blog is around the report added to SSMS which gives a high-level view of this new feature addition. This report is available only from SQL Server 2014 onwards, because that is the release in which the feature was introduced. Earlier versions of SQL Server Management Studio would not show the report in the list. If we try to launch the report on a database which does not have a memory-optimized filegroup defined, then we would see a message in the report. To demonstrate, I have created a fresh new database called MemoryOptimizedDB with no special filegroup. Here is the query used to identify whether a database has a memory-optimized filegroup or not. SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX' Once we add the filegroup using the below command, we would see a different version of the report. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA GO The report is still empty because we have not defined any memory-optimized table in the database. Total allocated size is shown as 0 MB. Now, let's add the folder location into the filegroup and also create a few in-memory tables. We have used the nomenclature of IMO to denote "InMemory Optimized" objects. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO') TO FILEGROUP [IMO_FG] GO You may have to change the path based on your SQL Server configuration. Below is the script to create the table. USE MemoryOptimizedDB GO --Drop table if it already exists. IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL DROP TABLE dbo.SQLAuthority GO CREATE TABLE dbo.SQLAuthority ( ID INT IDENTITY NOT NULL, Name CHAR(500) COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal', CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID), INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO As soon as the above script is executed, both the table and the index are created. If we run the report again, we would see something like below. Notice that the table memory is zero but the index is using memory. This is due to the fact that a hash index needs memory to manage the buckets it creates. So even if the table is empty, the index consumes memory. More about the internals of how In-Memory indexes and tables work will be reserved for future posts. Now, use the below script to populate the table with 10,000 rows: INSERT INTO SQLAuthority VALUES (DEFAULT) GO 10000 Here is the same report after inserting 10,000 rows into our in-memory table. There are a total of three sections in the whole report. 
Total memory consumed by in-memory objects; a pie chart showing memory distribution based on type of consumer – table, index and system; and details of memory usage by each table. The information about all three is taken from one single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system in-memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query. USE MemoryOptimizedDB GO SELECT OBJECT_NAME(OBJECT_ID), * FROM sys.dm_db_xtp_table_memory_stats WHERE OBJECT_ID > 0 GO This report would help a DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer when designing a solution. I am sure in the future we will discuss the whole concept of In-Memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP Series at Balmukund's Blog. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL. Tagged: SQL Memory, SQL Reports
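    As a rough follow-up to that query, here is a minimal sketch that rolls the table and index figures up into a single per-table total in MB, broadly mirroring what the report shows. It assumes the documented SQL Server 2014 column names on sys.dm_db_xtp_table_memory_stats (memory_allocated_for_table_kb, memory_used_by_table_kb and their index counterparts):

        USE MemoryOptimizedDB
        GO
        SELECT OBJECT_NAME(tms.object_id) AS table_name,
               (tms.memory_allocated_for_table_kb + tms.memory_allocated_for_indexes_kb) / 1024.0 AS allocated_mb,
               (tms.memory_used_by_table_kb + tms.memory_used_by_indexes_kb) / 1024.0 AS used_mb
        FROM sys.dm_db_xtp_table_memory_stats AS tms
        WHERE tms.object_id > 0          -- user tables only; system tables have negative object IDs
        ORDER BY allocated_mb DESC;
        GO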

    Read the article

  • CakePHP Missing Database Table Error

    - by BRADINO
    I am baking a new project management application at work and added a couple new tables to the database today. When I went into the console to bake the new models, they were not in the list... php /path/cake/console/cake.php bake all -app /path/app/ So I manually typed in the model name and I got a missing database table for model error. I checked and double-checked and the database table was named properly. Turns out that some files inside the /app/tmp/cache/ folder were causing Cake not to recognize that I had added new tables to my database. Once I deleted the cache files cake instantly recognized my new database tables and I was baking away! rm -Rf /path/app/tmp/cache/cake*

    Read the article

  • How to structure classes in the filesystem?

    - by da_b0uncer
    I have a few (view) classes: Table, Tree, PagingColumn, SelectionColumn, SparkLineColumn, TimeColumn. Currently they're flat under app/view like this: app/view/Table app/view/Tree app/view/PagingColumn ... I thought about restructuring it, because the Trees and Tables use the columns, but there are some columns which only work in a tree, some which work in trees and tables, and in the future there will probably be some which only work in tables, I don't know. My first idea was like this: app/view/Table app/view/Tree app/view/column/PagingColumn app/view/column/SelectionColumn app/view/column/SparkLineColumn app/view/column/TimeColumn But since the SelectionColumn is explicitly for trees, I fear that future developers could get the idea to misuse them. But how should I restructure it properly? Like this: app/view/table/panel/Table app/view/tree/panel/Tree app/view/tree/column/PagingColumn app/view/tree/column/SelectionColumn app/view/column/SparkLineColumn app/view/column/TimeColumn Or like this: app/view/Table app/view/Tree app/view/column/SparkLineColumn app/view/column/TimeColumn app/view/column/tree/PagingColumn app/view/column/tree/SelectionColumn

    Read the article

  • DBCC CHECKDB (BatmanDb, REPAIR_ALLOW_DATA_LOSS) &ndash; Are you Feeling Lucky?

    - by David Totzke
    I’m currently working for a client on a PowerBuilder to WPF migration.  It’s one of those “I could tell you, but I’d have to kill you” kind of clients and the quick-lime pits are currently occupied by the EMC tech…but I’ve said too much already. At approximately 3 or 4 pm that day users of the Batman[1] application here in Gotham[1] started to experience problems accessing the application.  Batman[2] is a document management system here that also integrates with the ERP system.  Very little goes on here that doesn’t involve Batman in some way.  The errors being received seemed to point to network issues (TCP protocol error, connection forcibly closed by the remote host etc…) but the real issue was much more insidious. Connecting to the database via SSMS and performing selects on certain tables underlying the application areas that were having problems started to reveal the issue.  You couldn’t do a SELECT * FROM MyTable without it bombing and giving the same error noted above.  A run of DBCC CHECKDB revealed 14 tables with corruption.  One of the tables with issues was the Document table.  Pretty central to a “document management” system.  Information was obtained from IT that a single drive in the SAN went bad in the night.  A new drive was in place and was working fine.  The partition that held the Batman database is configured for RAID Level 5 so a single drive failure shouldn’t have caused any trouble and yet, the database is corrupted.  They do hourly incremental backups here so the first thing done was to try a restore.  A restore of the most recent backup failed so they worked backwards until they hit a good point.  This successful restore was for a backup at 3AM – a full day behind.  This time also roughly corresponds with the time the SAN started to report the drive failure.  The plot thickens… I got my hands on the output from DBCC CHECKDB and noticed a pattern.  What’s sad is that nobody that should have noticed the pattern in the DBCC output did notice.  There was a rush to do things to try and recover the data before anybody really understood what was wrong with it in the first place.  Cooler heads must prevail in these circumstances and some investigation should be done and a plan of action laid out or you could end up making things worse[3].  DBCC CHECKDB also told us that: repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB Yikes.  That means that the database is so messed up that you’re definitely going to lose some stuff when you repair it to get it back to a consistent state.  All the more reason to do a little more investigation into the problem.  Rescuing this database is preferable to having to export all of the data possible from this database into a new one.  This is a fifteen year old application with about seven hundred tables.  There are TRIGGERS everywhere not to mention the referential integrity constraints to deal with.  Only fourteen of the tables have an issue.  We have a good backup that is missing the last 24 hours of business which means we could have a “do-over” of yesterday but that’s not a very palatable option either. All of the affected tables had TEXT columns and all of the errors were about LOB data types and orphaned off-row data which basically means TEXT, IMAGE or NTEXT columns.  If we did a SELECT on an affected table and excluded those columns, we got all of the rows.  We exported that data into a separate database.  Things are looking up.  
Working on a copy of the production database we then ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and that “fixed” everything up.   The allow data loss option will delete the bad rows.  This isn’t too horrible as we have all of those rows minus the text fields from out earlier export.  Now I could LEFT JOIN to the exported data to find the missing rows and INSERT them minus the TEXT column data. We had the restored data from the good 3AM backup that we could now JOIN to and, with fingers crossed, recover the missing TEXT column information.  We got lucky in that all of the affected rows were old and in the end we didn’t lose anything.  :O  All of the row counts along the way worked out and it looks like we dodged a major bullet here. We’ve heard back from EMC and it turns out the SAN firmware that they were running here is apparently buggy.  This thing is only a couple of months old.  Grrr…. They dispatched a technician that night to come and update it .  That explains why RAID didn’t save us. All-in-all this could have been a lot worse.  Given the root cause here, they basically won the lottery in not losing anything. Here are a few links to some helpful posts on the SQL Server Engine blog.  I love the title of the first one: Which part of 'REPAIR_ALLOW_DATA_LOSS' isn't clear? CHECKDB (Part 8): Can repair fix everything? (in fact, read the whole series) Ta da! Emergency mode repair (we didn’t have to resort to this one thank goodness)   Dave Just because I can…   [1] Names have been changed to protect the guilty. [2] I'm Batman. [3] And if I'm the coolest head in the room, you've got even bigger problems...
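    For what it's worth, the recovery described above boils down to two set-based statements. The following is only a hedged sketch of the pattern: the database names (Repaired, Exported, Restored3AM) and the dbo.Document/DocID/DocText names are hypothetical stand-ins, not the client's real schema.

        -- 1. Put back the rows that REPAIR_ALLOW_DATA_LOSS removed, minus the TEXT column,
        --    using the export taken before the repair.
        INSERT INTO Repaired.dbo.Document (DocID, Title)   -- plus the other non-TEXT columns
        SELECT e.DocID, e.Title
        FROM Exported.dbo.Document AS e
        LEFT JOIN Repaired.dbo.Document AS r ON r.DocID = e.DocID
        WHERE r.DocID IS NULL;

        -- 2. Recover the TEXT column for those rows from the good 3AM restore.
        UPDATE r
        SET r.DocText = b.DocText
        FROM Repaired.dbo.Document AS r
        JOIN Restored3AM.dbo.Document AS b ON b.DocID = r.DocID
        WHERE r.DocText IS NULL;

    Row counts at each step (exported, re-inserted, updated) are the sanity check that nothing was lost along the way, which is exactly the check described above.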

    Read the article

  • ASP.NET C# Session Variable

    - by SAMIR BHOGAYTA
    You can make changes in the web.config. You can give the location path, i.e. the pages to which you want to apply the security. Ex. 1) In the first case the page can be accessed by everyone: <!-- Allow ALL users to visit CreatingUserAccounts.aspx --> <location path="CreatingUserAccounts.aspx"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> 2) In this case only admin users can access the page: <!-- Allow ADMIN users to visit hello.aspx --> <location path="hello.aspx"> <system.web> <authorization> <allow roles="ADMIN" /> <deny users="*" /> </authorization> </system.web> </location> OR On every page you need to check the authorization according to the page logic, ex: On every page call this: if (Session["loggeduser"] != null) { DataSet dsUser = (DataSet)Session["loggeduser"]; if (dsUser != null && dsUser.Tables.Count > 0 && dsUser.Tables[0] != null && dsUser.Tables[0].Rows.Count > 0) { if (dsUser.Tables[0].Rows[0]["UserType"].ToString() == "SuperAdmin") { // your page logic here } if (dsUser.Tables[0].Rows[0]["UserType"].ToString() == "Admin") { // your page logic here } } }

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2

    - by pinaldave
    Social media has created an Always Connected World for us. Recently I enrolled myself to learn new technologies as a student. I had decided to focus on learning and decided not to stay connected on the internet while I was in the learning sessions. On the second day of the event, after the learning was over, I noticed lots of notifications from my friend on my various social media handles. He had connected with me on Twitter, Facebook, Google+, LinkedIn, YouTube as well as SMS, WhatsApp on the phone, Skype messages and, not to forget, a few emails. I right away called him up. The problem was very unique – let us hear the problem in his own words. "Pinal – we are in big trouble; we are not able to figure out what is going on. Our product details table is continuously losing rows. Lots of rows have disappeared since morning and we are unable to find why the rows are getting deleted. We have made sure that there is no DELETE command executed on the table as well. As a matter of fact, we have removed, from every single place, the code which references the table. We have done so many crazy things out of desperation but no luck. The rows are continuously deleted in a random pattern. Do you think we have problems with intrusion or a virus?" After describing the problems he had pasted a few rants about why I was not available during the day. I think it would not be smart to post those exact words here (due to many reasons). Well, my immediate reaction was to get online with him. His problem was unique to him and his team had been all out to fix the issue since morning. As he said, he had done quite a lot out of desperation. I started asking questions about auditing, policy management and profiling of the data. Very soon I realized that this problem was not as advanced as it looked. There was no intrusion, SQL Injection or virus issue. Well, long story short – it was a very simple issue of a foreign key created with ON UPDATE CASCADE and ON DELETE CASCADE. CASCADE allows deletions or updates of key values to cascade through the tables defined to have foreign key relationships that can be traced back to the table on which the modification is performed. ON DELETE CASCADE specifies that if an attempt is made to delete a row with a key referenced by foreign keys in existing rows in other tables, all rows containing those foreign keys are also deleted. ON UPDATE CASCADE specifies that if an attempt is made to update a key value in a row, where the key value is referenced by foreign keys in existing rows in other tables, all of the foreign key values are also updated to the new value specified for the key. (Reference: BOL) In simple words – when ON DELETE CASCADE is specified, whenever data from Table A is deleted, any data that references it in another table using the foreign key is deleted as well. In my friend's case, they had two tables, Products and ProductDetails. They had created foreign key referential integrity on the product id between the tables. Now, as fall arrived, they were updating their catalogue. When they were updating the catalogue they were deleting products which were no longer available. As the changes were cascading, the corresponding rows were also deleted from the other table. This is CORRECT. As a matter of fact, there is no error or anything; SQL Server is behaving exactly how it should behave. The problem was in the understanding and an inappropriate implementation of the business logic.  
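    To make that behaviour concrete, here is a minimal sketch of the kind of schema involved; the table and column definitions are invented for illustration rather than taken from the actual system:

        CREATE TABLE dbo.Products (
            ProductID INT PRIMARY KEY,
            Name VARCHAR(100) NOT NULL
        );
        CREATE TABLE dbo.ProductDetails (
            DetailID INT PRIMARY KEY,
            ProductID INT NOT NULL
                REFERENCES dbo.Products (ProductID)
                ON DELETE CASCADE
                ON UPDATE CASCADE,
            Detail VARCHAR(200) NOT NULL
        );

        INSERT INTO dbo.Products VALUES (1, 'Widget');
        INSERT INTO dbo.ProductDetails VALUES (100, 1, 'Colour: blue');

        -- Removing the product from the catalogue...
        DELETE FROM dbo.Products WHERE ProductID = 1;

        -- ...silently removes its detail rows too, even though no DELETE was ever
        -- issued against ProductDetails.
        SELECT COUNT(*) FROM dbo.ProductDetails;   -- returns 0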
What they needed were Product Master, Current Product Catalogue, and Product Order Details History tables. However, they were using only two tables, and without proper understanding, the relation between them was built using foreign keys. If there were to be only two tables, they should have used a soft delete, which does not actually delete the record but just hides it from view in the original product table. This approach could have saved them from the cascading delete issues. I will be writing a detailed post on the design implications etc. in the future, as in the above three lines I cannot cover every issue related to the design, and it is also not the scope of this blog post. More about designing in future blog posts. Once they learned their mistake, they were happy, as there was no intrusion, but trust me, sometimes we are our own enemy and this is a great example of it. In tomorrow's blog post we will go over their code and workarounds. Feel free to share your opinions, experiences and comments. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Is Query Performance different for different versions of SQL Server?

    - by Ronak Mathia
    I have fired 3 update queries in my stored procedure for 3 different tables. Each table contains almost 200,000 records and all records have to be updated. I am using indexing to speed up the performance. It works quite well with SQL Server 2008: the stored procedure takes only 12 to 15 minutes to execute (updating almost 1,000 rows per second in all three tables). But when I run the same scenario with SQL Server 2008 R2, the stored procedure takes much longer to complete: about 55 to 60 minutes (updating almost 100 rows per second in all three tables). I couldn't find any reason or solution for that. I have also tested the same scenario with SQL Server 2012, but the result is the same as above. Please give suggestions.
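    If it helps anyone reproduce the comparison, a simple way to see where the extra time goes on each instance is to capture timing and I/O statistics around the procedure call, and to confirm that both databases run at the same compatibility level. This is only a sketch; dbo.MyUpdateProcedure is a hypothetical stand-in for the real procedure:

        SET STATISTICS TIME ON;
        SET STATISTICS IO ON;
        EXEC dbo.MyUpdateProcedure;      -- the procedure containing the three UPDATE statements
        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;

        -- Compare this value on the 2008, 2008 R2 and 2012 instances.
        SELECT name, compatibility_level FROM sys.databases WHERE name = DB_NAME();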

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.   If you’re not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn’t normally get back without running another query (or with a trigger, I guess, but that’s not pretty). That inserted table I referenced – that’s part of the ‘behind-the-scenes’ work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn’t mean that an update is a delete followed by an inserted, it’s just the way it’s handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff. MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it’s new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don’t worry about the fact that I turned on IDENTITY_INSERT, that’s just so that I could insert the values) One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like “WHERE CURRENT OF …”, and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient. But as cool as $action is, that’s not the point of my post. If it were, I hope you’d all be disappointed, as you can’t really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a ‘src’ field, that wasn’t used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you’re needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn’t need to insert that into the actual table, just into a table for audit. 
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
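    The statements Rob refers to above aren't visible in this excerpt, so here is a minimal sketch of the kind of UPSERT-style MERGE being described, whose OUTPUT clause returns $action together with a source-only column; the dbo.Target and dbo.Source tables are invented for illustration, not his originals:

        CREATE TABLE dbo.Target (id INT PRIMARY KEY, name VARCHAR(50));
        CREATE TABLE dbo.Source (id INT PRIMARY KEY, name VARCHAR(50), src VARCHAR(20));
        INSERT INTO dbo.Target VALUES (1, 'old name');
        INSERT INTO dbo.Source VALUES (1, 'new name', 'feed A'), (2, 'brand new', 'feed B');

        MERGE dbo.Target AS t
        USING dbo.Source AS s
            ON t.id = s.id
        WHEN MATCHED THEN
            UPDATE SET t.name = s.name
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (id, name) VALUES (s.id, s.name)
        OUTPUT $action,
               s.src,                         -- source column that is never written to the target
               deleted.name AS old_name,
               inserted.name AS new_name;

    Wrapping that whole MERGE in an INSERT ... SELECT ... FROM ( ... ) block, or adding INTO after the OUTPUT clause, is what routes those rows into an audit table as described above.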

    Read the article

  • JavaOne 2012: JDBC Community Discussion

    - by sowmya
    At JavaOne 2012, Mark Biamonte of DataDirect Technologies and Lance Andersen of Oracle organized a discussion about JDBC. To learn more about using JDBC to develop database applications, see the JDBC trail in the Java Tutorials. You will learn how to use the basic JDBC API to:
    * create tables
    * insert values into them
    * query the tables
    * retrieve the results of the queries
    * update the tables
    In this process, you will learn how to use simple statements and prepared statements, and see an example of a stored procedure. You will also learn how to perform transactions and how to catch exceptions and warnings. - Sowmya

    Read the article

  • Setting Up and Running Summary Advisor on an Exalytics Machine (Oracle-by-Example)

    - by Saresh
    If you are running Oracle BI on an Exalytics machine, you can use Summary Advisor to identify the aggregates that will increase query performance. Summary Advisor intelligently recommends an optimal list of aggregate tables based on query patterns that will achieve maximum query performance gain while meeting specific resource constraints. Summary Advisor then generates an aggregate creation script that can be run to create the recommended aggregate tables. Aggregate tables reduce query times by storing precomputed results for queries that include rolled-up data. This tutorial covers steps to set up, configure, and run Summary Advisor on an Exalytics machine using TimesTen database as a target for storing aggregates. You can find the Oracle By Example (OBE) in the Oracle Learning Library (OLL). The content in OLL is available to all customers, partners, and employees.

    Read the article

  • Organizing your Data Access Layer

    - by nighthawk457
    I am using Entity Framework as my ORM in an ASP.NET application. I have my database already created, so I ended up generating the entity model from it. What is a good way to organize the files/classes in the data access layer? My Entity Framework model is in a class library and I was planning on adding additional classes per entity (i.e. per database table) and putting all the queries related to those tables in their respective classes. I am not sure if this is the right approach, and if it is, where do the queries requiring data from multiple tables go? Am I completely wrong in organizing my files based on entities/tables, and should I organize them based on functional areas instead?

    Read the article

  • java.lang.IllegalAccessException during Ant jwsc webservice build

    - by KevB
    Hi. I have a large application, part of which relies on a set of 3 webservices. I'm currently in the process of writing an Ant build script to build and package the application into an EAR file. When building the web sub-project for this application I use the <jwsc> task in Ant to compile the webservices. This causes an IllegalAccessException, as outlined in the stack trace below: [jwsc] warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds [jwsc] JWS: processing module weboutput [jwsc] Parsing source files [jwsc] Parsing source files [jwsc] 3 JWS files being processed for module weboutput [jwsc] JWS: C:\dev\ir\irWeb\src\webservices\DailyRun.java Validated. [jwsc] JWS: C:\dev\ir\irWeb\src\webservices\PendingRegistrationsSweep.java Validated. [jwsc] JWS: C:\dev\ir\irWeb\src\webservices\RegistrationsGoLive.java Validated. [jwsc] Compiling 6 source files to C:\DOCUME~1\KEVIN~1.BRE\LOCALS~1\Temp\_5l950r [jwsc] An exception has occurred in the compiler (1.6.0_23). Please file a bug at the Java Developer Connection (http://java.sun.com/webapps/bugreport) after checking the Bug Parade for duplicates. Include your program and the following diagnostic in your report. Thank you. [jwsc] java.lang.IllegalAccessError: tried to access class com.sun.tools.javac.jvm.ClassReader$AnnotationDefaultCompleter from class com.sun.tools.javac.jvm.ClassReader [jwsc] at com.sun.tools.javac.jvm.ClassReader.attachAnnotationDefault(ClassReader.java:1128) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readMemberAttr(ClassReader.java:906) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readMemberAttrs(ClassReader.java:1027) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readMethod(ClassReader.java:1490) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readClass(ClassReader.java:1586) [jwsc] at com.sun.tools.javac.jvm.ClassReader.readClassFile(ClassReader.java:1658) [jwsc] at com.sun.tools.javac.jvm.ClassReader.fillIn(ClassReader.java:1845) [jwsc] at com.sun.tools.javac.jvm.ClassReader.complete(ClassReader.java:1777) [jwsc] at com.sun.tools.javac.code.Symbol.complete(Symbol.java:386) [jwsc] at com.sun.tools.javac.code.Symbol$ClassSymbol.complete(Symbol.java:763) [jwsc] at com.sun.tools.javac.jvm.ClassReader.loadClass(ClassReader.java:1951) [jwsc] at com.sun.tools.javac.comp.Resolve.loadClass(Resolve.java:842) [jwsc] at com.sun.tools.javac.comp.Resolve.findIdentInPackage(Resolve.java:1011) [jwsc] at com.sun.tools.javac.comp.Attr.selectSym(Attr.java:1921) [jwsc] at com.sun.tools.javac.comp.Attr.visitSelect(Attr.java:1835) [jwsc] at com.sun.tools.javac.tree.JCTree$JCFieldAccess.accept(JCTree.java:1522) [jwsc] at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:360) [jwsc] at com.sun.tools.javac.comp.Attr.attribType(Attr.java:390) [jwsc] at com.sun.tools.javac.comp.MemberEnter.attribImportType(MemberEnter.java:681) [jwsc] at com.sun.tools.javac.comp.MemberEnter.visitImport(MemberEnter.java:545) [jwsc] at com.sun.tools.javac.tree.JCTree$JCImport.accept(JCTree.java:495) [jwsc] at com.sun.tools.javac.comp.MemberEnter.memberEnter(MemberEnter.java:387) [jwsc] at com.sun.tools.javac.comp.MemberEnter.memberEnter(MemberEnter.java:399) [jwsc] at com.sun.tools.javac.comp.MemberEnter.visitTopLevel(MemberEnter.java:512) [jwsc] at com.sun.tools.javac.tree.JCTree$JCCompilationUnit.accept(JCTree.java:446) [jwsc] at com.sun.tools.javac.comp.MemberEnter.memberEnter(MemberEnter.java:387) [jwsc] at com.sun.tools.javac.comp.MemberEnter.complete(MemberEnter.java:819) [jwsc] at 
com.sun.tools.javac.code.Symbol.complete(Symbol.java:386) [jwsc] at com.sun.tools.javac.code.Symbol$ClassSymbol.complete(Symbol.java:763) [jwsc] at com.sun.tools.javac.comp.Enter.complete(Enter.java:464) [jwsc] at com.sun.tools.javac.comp.Enter.main(Enter.java:442) [jwsc] at com.sun.tools.javac.main.JavaCompiler.enterTrees(JavaCompiler.java:819) [jwsc] at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:727) [jwsc] at com.sun.tools.javac.main.Main.compile(Main.java:353) [jwsc] at com.sun.tools.javac.main.Main.compile(Main.java:279) [jwsc] at com.sun.tools.javac.main.Main.compile(Main.java:270) [jwsc] at com.sun.tools.javac.Main.compile(Main.java:69) [jwsc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [jwsc] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.taskdefs.compilers.Javac13.execute(Javac13.java:56) [jwsc] at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1097) [jwsc] at weblogic.wsee.tools.anttasks.DelegatingJavacTask$ExposingJavac.compile(DelegatingJavacTask.java:343) [jwsc] at weblogic.wsee.tools.anttasks.DelegatingJavacTask.compile(DelegatingJavacTask.java:286) [jwsc] at weblogic.wsee.tools.anttasks.JwscTask.javac(JwscTask.java:335) [jwsc] at weblogic.wsee.tools.anttasks.JwsModule.compile(JwsModule.java:390) [jwsc] at weblogic.wsee.tools.anttasks.JwsModule.build(JwsModule.java:262) [jwsc] at weblogic.wsee.tools.anttasks.JwscTask.execute(JwscTask.java:227) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) [jwsc] at org.apache.tools.ant.Project.executeTargets(Project.java:1249) [jwsc] at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442) [jwsc] at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.Project.executeTarget(Project.java:1366) [jwsc] at com.bea.workshop.cmdline.antlib.AntExTask.execute(AntExTask.java:406) [jwsc] at com.bea.workshop.cmdline.antlib.AntCallExTask.execute(AntCallExTask.java:118) [jwsc] at 
org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.Project.executeTarget(Project.java:1366) [jwsc] at com.bea.workshop.cmdline.antlib.AntExTask.execute(AntExTask.java:406) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at net.sf.antcontrib.logic.IfTask.execute(IfTask.java:217) [jwsc] at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.TaskAdapter.execute(TaskAdapter.java:154) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at net.sf.antcontrib.logic.IfTask.execute(IfTask.java:197) [jwsc] at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.TaskAdapter.execute(TaskAdapter.java:154) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at 
java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) [jwsc] at net.sf.antcontrib.logic.ForTask.doSequentialIteration(ForTask.java:259) [jwsc] at net.sf.antcontrib.logic.ForTask.doToken(ForTask.java:268) [jwsc] at net.sf.antcontrib.logic.ForTask.doTheTasks(ForTask.java:299) [jwsc] at net.sf.antcontrib.logic.ForTask.execute(ForTask.java:244) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) [jwsc] at org.apache.tools.ant.Project.executeTargets(Project.java:1249) [jwsc] at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442) [jwsc] at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105) [jwsc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) [jwsc] at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) [jwsc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [jwsc] at java.lang.reflect.Method.invoke(Method.java:597) [jwsc] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) [jwsc] at org.apache.tools.ant.Task.perform(Task.java:348) [jwsc] at org.apache.tools.ant.Target.execute(Target.java:390) [jwsc] at org.apache.tools.ant.Target.performTasks(Target.java:411) [jwsc] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1397) [jwsc] at org.apache.tools.ant.Project.executeTarget(Project.java:1366) [jwsc] at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) [jwsc] at org.apache.tools.ant.Project.executeTargets(Project.java:1249) [jwsc] at 
org.apache.tools.ant.Main.runBuild(Main.java:801) [jwsc] at org.apache.tools.ant.Main.startAnt(Main.java:218) [jwsc] at org.apache.tools.ant.launch.Launcher.run(Launcher.java:280) [jwsc] at org.apache.tools.ant.launch.Launcher.main(Launcher.java:109) [AntUtil.deleteDir] Deleting directory C:\DOCUME~1\KEVIN~1.BRE\LOCALS~1\Temp_5l950r

    The Ant target that uses the <jwsc> task is this:

        <target name="webservice.build" depends="init,generated.root.init">
          <path id="jwsc.srcpath">
            <path path="${java.sourcepath}" />
            <pathelement path="build/assembly/.src" />
          </path>
          <taskdef name="jwsc" classname="weblogic.wsee.tools.anttasks.JwscTask" >
            <classpath>
              <path refid="weblogic.jar.classpath" />
            </classpath>
          </taskdef>
          <property name="jwsc.module.root" value="${project.dir}/build/weboutput"/>
          <property name="jwsc.contextpath" value="irWeb"/>
          <property name="jwsc.srcpath.prop" refid="jwsc.srcpath"/>
          <path id="jwsc.classpath">
            <path refid="weblogic.jar.classpath" />
            <path refid="java.classpath" />
            <pathelement path="${java.outpath}" />
          </path>
          <jwsc destdir="${project.dir}/build" classpathref="jwsc.classpath">
            <module name="weboutput" explode="true" contextPath="${jwsc.contextpath}" >
              <jwsFileSet srcdir="${webservices.dir}" type="JAXRPC">
                <include name="**/*.java"/>
              </jwsFileSet>
              <descriptor file="${jwsc.module.root}/WEB-INF/web.xml" />
              <descriptor file="${jwsc.module.root}/WEB-INF/weblogic.xml" />
            </module>
          </jwsc>
        </target>

    I have no idea what could be causing the compiler to throw this error at build time, and a day of Google searching has turned up other instances of this error caused by different triggers; the solutions for those problems didn't work for me. I also found a single report on the Oracle forums that seemed to be a carbon copy of this issue, but there were no replies. The application is written in Weblogic Workshop 10, runs on Weblogic Server 10.3, and uses Beehive / NetUI. Not sure if that would make a difference or not though. The build scripts were automatically generated by Weblogic Workshop, with some tweaks and fixes made to other aspects of the files by myself to fix other compatibility issues. I am using Java 1.6.0_23 from Sun, and Ant 1.8.1. Any help or advice would be greatly appreciated.

    Read the article

  • Using sqlite from vala without dependence on glib

    - by celil
    I need to use the Sqlite vapi without any dependence on GLib. SQLite is a non-GObject library, so it should be possible to do that. However, when I try to compile the following file with the --profile posix option, using Sqlite; void main() { stdout.printf("Hello, World!"); } I get the following error messages: sqlite3.vapi:357.56-357.59: error: The symbol `GLib' could not be found public int bind_blob (int index, void* value, int n, GLib.DestroyNotify destroy_notify); ^^^^ sqlite3.vapi:362.68-362.71: error: The symbol `GLib' could not be found public int bind_text (int index, owned string value, int n = -1, GLib.DestroyNotify destroy_notify = GLib.g_free); ^^^^ sqlite3.vapi:411.42-411.45: error: The symbol `GLib' could not be found public void result_blob (uint8[] data, GLib.DestroyNotify? destroy_notify = GLib.g_free); ^^^^ sqlite3.vapi:420.59-420.62: error: The symbol `GLib' could not be found public void result_text (string value, int length = -1, GLib.DestroyNotify? destroy_notify = GLib.g_free); ^^^^ Compilation failed: 4 error(s), 0 warning(s) It seems that several of the functions defined in the sqlite vapi make references to the GLib.g_free and GLib.DestroyNotify symbols. Are there any POSIX alternatives to those?

    Read the article

  • Compiling Java code in terminal having a Jar in CLASSPATH

    - by Masi
    How can you compile code with javac in a terminal when google-collections is on the CLASSPATH? Example of the code I am trying to compile with javac in a terminal (it works in Eclipse): import com.google.common.collect.BiMap; import com.google.common.collect.HashBiMap; public class Locate { ... BiMap<MyFile, Integer> rankingToResult = HashBiMap.create(); ... } Compiling in the terminal: src 288 % javac Locate.java Locate.java:14: package com.google.common.collect does not exist import com.google.common.collect.BiMap; ^ Locate.java:15: package com.google.common.collect does not exist import com.google.common.collect.HashBiMap; ^ Locate.java:153: cannot find symbol symbol : class BiMap location: class Locate BiMap<MyFile, Integer> rankingToResult = HashBiMap.create(); ^ Locate.java:153: cannot find symbol symbol : variable HashBiMap location: class Locate BiMap<MyFile, Integer> rankingToResult = HashBiMap.create(); ^ 4 errors My CLASSPATH: src 289 % echo $CLASSPATH /u/1/bin/javaLibraries/google-collect-1.0.jar
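    Since the errors show javac is not seeing the jar at all, a quick way to rule out the environment is to pass the classpath explicitly; javac's -cp/-classpath flag overrides the CLASSPATH environment variable, so if something along the lines of the command below works (using the jar path echoed above, plus the current directory for MyFile), the problem is in how the variable is being set rather than in the jar itself:

        javac -cp .:/u/1/bin/javaLibraries/google-collect-1.0.jar Locate.java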

    Read the article

  • Webservice error

    - by Parth
    I am new to webservices. I have created a basic stock market webservice: I have successfully created the server script for it and placed it on my server, and I have also created a client script and accessed it through the same server. Is that valid? Can both files be accessed from the same server, or do I have to place them on different servers? If yes, then why? If no, then why do I get the blank page? I am using the nusoap library for the webservice. When I use my client script from my local machine I get these errors "Deprecated: Assigning the return value of new by reference is deprecated in D:\wamp\www\pranav_test\nusoap\lib\nusoap.php on line 6506 Fatal error: Class 'soapclient' not found in D:\wamp\www\pranav_test\stockclient.php on line 3" stockserver.php at the server: <?php function getStockQuote($symbol) { mysql_connect('localhost','root','******'); mysql_select_db('pranav_demo'); $query = "SELECT stock_price FROM stockprices " . "WHERE stock_symbol = '$symbol'"; $result = mysql_query($query); $row = mysql_fetch_assoc($result); return $row['stock_price']; } require('nusoap/lib/nusoap.php'); $server = new soap_server(); $server->configureWSDL('stockserver', 'urn:stockquote'); $server->register("getStockQuote", array('symbol' => 'xsd:string'), array('return' => 'xsd:decimal'), 'urn:stockquote', 'urn:stockquote#getStockQuote'); $HTTP_RAW_POST_DATA = isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : ''; $server->service($HTTP_RAW_POST_DATA); ?> stockclient.php: <?php require_once('nusoap/lib/nusoap.php'); $c = new soapclient('http://192.168.1.20/pranav_test/stockserver.php'); $stockprice = $c->call('getStockQuote', array('symbol' => 'ABC')); echo "The stock price for 'ABC' is $stockprice."; ?> Please help...

    Read the article

  • Using StringBuilder to process csv files to save heap space

    - by portoalet
    I am reading a csv file that has about 50,000 lines and is 1.1MiB in size (and can grow larger). In Code1, I use String to process the csv, while in Code2 I use StringBuilder (only one thread executes the code, so there are no concurrency issues). Using StringBuilder makes the code a little bit harder to read than using the normal String class. Am I prematurely optimizing things with StringBuilder in Code2 to save a bit of heap space and memory? Code1 fr = new FileReader(file); BufferedReader reader = new BufferedReader(fr); String line = reader.readLine(); while ( line != null ) { int separator = line.indexOf(','); String symbol = line.substring(0, separator); int begin = separator; separator = line.indexOf(',', begin+1); String price = line.substring(begin+1, separator); // Publish this update publisher.publishQuote(symbol, price); // Read the next line of fake update data line = reader.readLine(); } Code2 fr = new FileReader(file); BufferedReader reader = new BufferedReader(fr); StringBuilder stringBuilder = new StringBuilder(reader.readLine()); while( stringBuilder.toString() != null ) { int separator = stringBuilder.toString().indexOf(','); String symbol = stringBuilder.toString().substring(0, separator); int begin = separator; separator = stringBuilder.toString().indexOf(',', begin+1); String price = stringBuilder.toString().substring(begin+1, separator); publisher.publishQuote(symbol, price); stringBuilder.replace(0, stringBuilder.length(), reader.readLine()); }
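    One thing worth noting before choosing: every call to stringBuilder.toString() builds a brand-new String, so Code2 actually allocates more temporary objects per line than Code1, not fewer, and on older JDKs (before 7u6) String.substring shared the parent string's char array rather than copying it, so the plain-String version is cheaper than it looks. As a rough sketch only (the QuotePublisher interface below is a stand-in for the publisher object in the question, not part of the original code), a single-pass String version of the loop could look like this:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class QuoteFeed {

            // Stand-in for the publisher used in the question.
            interface QuotePublisher {
                void publishQuote(String symbol, String price);
            }

            // Parses "symbol,price,..." lines and publishes each pair.
            // Assumes well-formed lines with at least one comma.
            static void publishAll(String file, QuotePublisher publisher) throws IOException {
                BufferedReader reader = new BufferedReader(new FileReader(file));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        int first = line.indexOf(',');
                        int second = line.indexOf(',', first + 1);
                        String symbol = line.substring(0, first);
                        String price = line.substring(first + 1, second < 0 ? line.length() : second);
                        publisher.publishQuote(symbol, price);
                    }
                } finally {
                    reader.close();
                }
            }
        }

    If memory is really the concern, measuring both versions with a profiler on the actual 50,000-line file is a safer guide than reasoning about it.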

    Read the article

  • How to deal with recursive dependencies between static libraries using the binutils linker?

    - by Jack Lloyd
    I'm porting an existing system from Windows to Linux. The build is structured with multiple static libraries. I ran into a linking error where a symbol (defined in libA) could not be found in an object from libB. The linker line looked like g++ test_obj.o -lA -lB -o test The problem of course being that by the time the linker finds it needs the symbol from libA, it has already passed it by, and does not rescan, so it simply errors out even though the symbol is there for the taking. My initial idea was of course to simply swap the link (to -lB -lA) so that libA is scanned afterwards, and any symbols missing from libB that are in libA are picked up. But then I find there is actually a recursive dependency between libA and libB! I'm assuming the Visual C++ linker handles this in some way (does it rescan by default?). Ways of dealing with this I've considered: Use shared objects. Unfortunately this is undesirable from the perspective of requiring PIC compilation (this is performance-sensitive code and losing %ebx to hold the GOT would really hurt), and shared objects aren't needed. Build one mega ar of all of the objects, avoiding the problem. Restructure the code to avoid the recursive dependency (which is obviously the Right Thing to do, but I'm trying to do this port with minimal changes). Do you have other ideas to deal with this? Is there some way I can convince the binutils linker to perform rescans of libraries it has already looked at when it is missing a symbol?
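    GNU ld does have a switch for exactly this: archives wrapped in a group are rescanned until no new undefined symbols can be resolved, which covers a libA/libB cycle. Keeping the library names from the question, the link line would look something like:

        g++ test_obj.o -Wl,--start-group -lA -lB -Wl,--end-group -o test

    Simply repeating an archive (g++ test_obj.o -lA -lB -lA -o test) achieves the same thing for a two-way cycle; the ld documentation warns that grouping has a noticeable performance cost, which is why it only makes sense for genuinely circular archives.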

    Read the article

  • Not getting response using SOAP and PHP.

    - by Nitish
    I'm using PHP5 and NuSOAP - SOAP Toolkit for PHP. I created the server using the code below: <?php function getStockQuote($symbol) { mysql_connect('localhost','user','pass'); mysql_select_db('test'); $query = "SELECT stock_price FROM stockprices WHERE stock_symbol = '$symbol'"; $result = mysql_query($query); $row = mysql_fetch_assoc($result); echo $row['stock_price']; } $a=require('lib/nusoap.php'); $server = new soap_server(); $server->configureWSDL('stockserver', 'urn:stockquote'); $server->register("getStockQuote", array('symbol' => 'xsd:string'), array('return' => 'xsd:decimal'), 'urn:stockquote', 'urn:stockquote#getStockQuote'); $HTTP_RAW_POST_DATA = isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : ''; $server->service($HTTP_RAW_POST_DATA); ?> The client has the following code: <?php require_once('lib/nusoap.php'); $c = new soapclientNusoap('http://localhost/stockserver.php?wsdl'); $stockprice = $c->call('getStockQuote', array('symbol' => 'ABC')); echo "The stock price for 'ABC' is $stockprice."; ?> The database was created using the code below: CREATE TABLE `stockprices` ( `stock_id` INT UNSIGNED NOT NULL AUTO_INCREMENT , `stock_symbol` CHAR( 3 ) NOT NULL , `stock_price` DECIMAL(8,2) NOT NULL , PRIMARY KEY ( `stock_id` ) ); INSERT INTO `stockprices` VALUES (1, 'ABC', '75.00'); INSERT INTO `stockprices` VALUES (2, 'DEF', '45.00'); INSERT INTO `stockprices` VALUES (3, 'GHI', '12.00'); INSERT INTO `stockprices` VALUES (4, 'JKL', '34.00'); When I run the client the result I get is this: The stock price for 'ABC' is . 75.00 is not being printed as the price.

    Read the article

  • Write a sql for updating data based on time

    - by Lu Lu
    Hello everyone. Because I am new to SQL Server and T-SQL, I will need your help. I have 2 tables: Realtime and EOD. To illustrate my question, here is example data for the 2 tables:

    ---Realtime table---
    Symbol  Date                 Value
    ABC     1/3/2009 03:05:01    327   // this day does not exist in EOD -> insert
    BBC     1/3/2009 03:05:01    458   // this day does not exist in EOD -> insert
    ABC     1/2/2009 03:05:01    326   // this day is new -> update
    BBC     1/2/2009 03:05:01    454   // this day is new -> update
    ABC     1/2/2009 02:05:01    323
    BBC     1/2/2009 02:05:01    453
    ABC     1/2/2009 01:05:01    313
    BBC     1/2/2009 01:05:01    423

    ---EOD table---
    Symbol  Date                 Value
    ABC     1/2/2009 02:05:01    323
    BBC     1/2/2009 02:05:01    453

    I need to create a stored procedure to update the value of each symbol. For each symbol, if the Realtime data for a day is newer than what EOD holds (comparing Realtime and EOD), the procedure should update the EOD value and date for that day if a row already exists, otherwise insert a new row. After the procedure runs, the EOD table should hold the new data:

    ---EOD table---
    Symbol  Date                 Value
    ABC     1/3/2009 03:05:01    327
    BBC     1/3/2009 03:05:01    458
    ABC     1/2/2009 03:05:01    326
    BBC     1/2/2009 03:05:01    454

    P/S: I use SQL Server 2005, and I have a similar answered question here: http://stackoverflow.com/questions/2726369/help-to-the-way-to-write-a-query-for-the-requirement Please help me. Thanks.

    Read the article

  • Getting method arguments with Roslyn

    - by Kevin Burton
    I can get a list from the solution of all calls to a particular method using the following code: var createCommandList = new List<MethodSymbol>(); INamedTypeSymbol interfaceSymbol = (from p in solution.Projects select p.GetCompilation().GetTypeByMetadataName("BuySeasons.BsiServices.DataResource.IBsiDataConnection")).FirstOrDefault(); foreach (ISymbol symbol in interfaceSymbol.GetMembers("CreateCommand")) { if (symbol.Kind == CommonSymbolKind.Method && symbol is MethodSymbol) { createCommandList.Add(symbol as MethodSymbol); } } foreach (MethodSymbol methodSymbol in createCommandList) { foreach (ReferencedSymbol referenceSymbol in methodSymbol.FindReferences(solution)) { foreach (ReferenceLocation referenceLocation in from l in referenceSymbol.Locations orderby l.Document.FilePath select l) { if (referenceLocation.Location.GetLineSpan(false).StartLinePosition.Line == referenceLocation.Location.GetLineSpan(false).EndLinePosition.Line) { Debug.WriteLine(string.Format("{0} {1} at {2} {3}/{4} - {5}", methodSymbol.Name, "(" + string.Join(",", (from p in methodSymbol.Parameters select p.Type.Name + " " + p.Name).ToArray()) + ")", Path.GetFileName(referenceLocation.Location.GetLineSpan(false).Path), referenceLocation.Location.GetLineSpan(false).StartLinePosition.Line, referenceLocation.Location.GetLineSpan(false).StartLinePosition.Character, referenceLocation.Location.GetLineSpan(false).EndLinePosition.Character)); } else { throw new ApplicationException("Call spans multiple lines"); } } } } But this gives me a list of ReferencedSymbols. Although this gives me the file and line number that the method is called from, I would also like to get the specific arguments that the method is called with. How can I either convert what I have or get the same information with Roslyn? (Note that I first load the solution with the Solution.Load method and then loop through to find out where the method is defined/declared (createCommandList).) Thank you.

    Read the article

  • Managing libraries and imports in a programming language

    - by sub
    I've created an interpreter for a stupid programming language in C++ and the whole core structure is finished (Tokenizer, Parser, Interpreter including Symbol tables, core functions, etc.). Now I have a problem with creating and managing the function libraries for this interpreter (I'll explain what I mean by that later). So currently my core function handler is horrible: // Simplified version myLangResult SystemFunction( name, argc, argv ) { if ( name == "print" ) { if( argc < 1 ) { Error('blah'); } cout << argv[ 0 ]; } else if ( name == "input" ) { if( argc < 1 ) { Error('blah'); } string res; getline( cin, res ); SetVariable( argv[ 0 ], res ); } else if ( name == "exit" ) { exit( 0 ); } And now think of each else if being 10 times more complicated and there being 25 more system functions. Unmaintainable, feels horrible, is horrible. So I thought: how do I create some sort of libraries that contain all the functions and, when they are imported, initialize themselves and add their functions to the symbol table of the running interpreter? However, this is the point where I don't really know how to go on. What I want to achieve is that there is, e.g., an (extern?) string library for my language, e.g. string, and it is imported from within a program in that language, for example: import string myString = "abcde" print string.at( myString, 2 ) # output: c My problems: How to separate the function libs from the core interpreter and load them? How to get all their functions into a list and add it to the symbol table when needed? What I was thinking of doing: At the start of the interpreter, as all libraries are compiled with it, every single function calls something like RegisterFunction( string namespace, myLangResult (*functionPtr) ); which adds itself to a list. When import X is then called from within the language, the list built with RegisterFunction is then added to the symbol table. Disadvantages that spring to mind: all libraries are directly in the interpreter core, the size grows, and it will definitely slow it down.
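    The RegisterFunction idea in the last paragraph is essentially a name-to-function dispatch table, which is also what replaces the big if/else chain. A minimal sketch of that shape (written in Java purely to keep the illustration compact; the class and method names here are invented, and in the actual C++ interpreter the same structure would be a std::map from name to function pointer):

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Hypothetical illustration: each library contributes entries to a shared
        // table instead of being another branch in one giant if/else chain.
        public class FunctionRegistry {

            public interface SystemFunction {
                String call(List<String> argv);
            }

            private final Map<String, SystemFunction> table = new HashMap<>();

            // Called by each library when it is imported ("string.at", "io.print", ...).
            public void register(String qualifiedName, SystemFunction fn) {
                table.put(qualifiedName, fn);
            }

            // The interpreter's single call site: one lookup replaces the if/else ladder.
            public String call(String qualifiedName, List<String> argv) {
                SystemFunction fn = table.get(qualifiedName);
                if (fn == null) {
                    throw new IllegalArgumentException("Unknown function: " + qualifiedName);
                }
                return fn.call(argv);
            }
        }

    A string library would then register its entries when import string is executed, e.g. registry.register("string.at", argv -> String.valueOf(argv.get(0).charAt(Integer.parseInt(argv.get(1))))), and the interpreter core never needs to know at compile time which libraries exist.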

    Read the article

  • How do you perform arithmetic calculations on symbols in Scheme/Lisp?

    - by kunjaan
    I need to perform calculations with a symbol. I need to convert a time in hh:mm form to the number of minutes passed. ;; (get-minutes symbol)->number ;; convert the time in hh:mm to minutes ;; (get-minutes 6:19)-> 6* 60 + 19 (define (get-minutes time) (let* ((a-time (string->list (symbol->string time))) (hour (first a-time)) (minutes (third a-time))) (+ (* hour 60) minutes))) This code is incorrect: I get a character back after all that conversion and cannot perform a correct calculation. Do you have any suggestions? I can't change the input type. Context: The input is a flight schedule, so I cannot alter the data structure. ;; ---------------------------------------------------------------------- Edit: Figured out an ugly solution. Please suggest something better. (define (get-minutes time) (let* ((a-time (symbol->string time)) (hour (string->number (substring a-time 0 1))) (minutes (string->number (substring a-time 2 4)))) (+ (* hour 60) minutes)))

    Read the article
