Search Results

Search found 33139 results on 1326 pages for 'embedded database'.

  • SharePoint - connectable receiving, XSLT-editable web part?

    - by Corey O.
    You can use the "Data View" webpart to take data from a database call, then you can edit the XSLT manually to make it look and do whatever you want, within the scope of that data and XSLT capabilities. Is there a web part that allows me to do the same thing, but with data that is received by a connected webpart source rather than a database set? For example: I'd like to be able to pull in a Data View webpart that queries a bunch of data and makes it available all over the page. Then, I would like to hide that Data View. Once it is hidden, I'd like to be able to take another customizable webpart and pull a field (or multiple fields if possible) from the Data View webpart via a webpart connection. This would allow me to display various fields in creative formats without having to call the same query multiple times on the same page by different webparts. Is there an in house webpart that will allow me to do this?

  • jQuery Validation Plugin: Use Custom Ajax Method

    - by namtax
    Hi, I'm looking for some assistance with the jQuery form validation plugin if possible. I am validating the email field of my form on blur by making an Ajax call to my database, which checks if the text in the email field is already in the database.

        // Check email validity on blur
        $('#sEmail').blur(function(){
            // Grab email from form
            var itemValue = $('#sEmail').val();
            // Serialise data for ajax processing
            var emailData = { sEmail: itemValue };
            // Do Ajax call
            $.getJSON('http://localhost:8501/ems/trunk/www/cfcs/admin_user_service.cfc?method=getAdminUserEmail&returnFormat=json&queryformat=column', emailData, function(data){
                var errorMessage;
                if (data != false) {
                    errorMessage = 'This email address has already been registered';
                } else {
                    errorMessage = 'Good';
                }
            });
        });

    What I would like to do is incorporate this call into the rules of my jQuery validation plugin, e.g.:

        $("#setAdminUser").validate({
            rules: {
                sEmail: {
                    required: function(){
                        // Replicate my on-blur ajax email call here
                    }
                }
            },
            messages: {
                sEmail: {
                    required: "this email already exists"
                }
            }
        });

    Wondering if there is any way of achieving this? Many thanks
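
    For what it's worth, the validation plugin has a built-in remote rule that performs exactly this kind of server-side check during validation; a minimal sketch, assuming the CFC is reworked to return true when the address is available (the relative URL and method name below are invented for illustration):

        $("#setAdminUser").validate({
            rules: {
                sEmail: {
                    required: true,
                    email: true,
                    // 'remote' sends the field value to the URL and treats a
                    // response of true as valid, anything else as invalid
                    remote: {
                        url: '/cfcs/admin_user_service.cfc',   // hypothetical path
                        type: 'get',
                        data: {
                            method: 'isEmailAvailable',        // hypothetical method
                            returnFormat: 'json'
                        }
                    }
                }
            },
            messages: {
                sEmail: {
                    remote: "This email address has already been registered"
                }
            }
        });

    The field's own value is sent automatically under its name, so the separate blur handler becomes unnecessary.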

  • VBScript & Access MDB - 800A0E7A - "Provider cannot be found. It may not be properly installed"

    - by Perma
    Hey gang, I'm having a problem with a VBScript connecting to an Access MDB database. My platform is Vista x64, but the majority of resources out there are for ASP/IIS7. Quite simply, I can't get it to connect. I'm getting the following error: 800A0E7A - "Provider cannot be found. It may not be properly installed". My code is:

        Set conn = CreateObject("ADODB.Connection")
        strConnect = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\database.MDB"
        conn.Open strConnect

    So far I have run %WINDIR%\System32\odbcad32.exe to try to configure the driver in 32-bit mode, but it hasn't done the trick. Any suggestions would be greatly appreciated.
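
    One likely culprit, stated as an assumption since it can't be verified from here: the Jet OLE DB provider exists only as a 32-bit component, so the default 64-bit script host on x64 Windows cannot find it. Launching the script under the 32-bit host (which, confusingly, lives in SysWOW64) usually resolves 800A0E7A; the script path below is hypothetical:

        REM 64-bit cscript cannot load the 32-bit Jet provider;
        REM run the script with the 32-bit host instead
        C:\Windows\SysWOW64\cscript.exe C:\scripts\connect.vbs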

  • SphinxSearch or a spider - which one to choose?

    - by r2b2
    Hello, here is my problem: we own SiteA and SiteB, and they share the same server and database, where we have full control. SiteC, SiteD and SiteE are some of the sites we own as well, but they reside on different web hosts. The goal is to create a unified search functionality for all of the sites mentioned above. That is, if somebody searches for a term on SiteA, the search results will automatically include results from SiteB, SiteC, SiteD and SiteE too. The search results should be shown under the website they were found in. All these websites' content is stored in their own databases. If I use SphinxSearch to index the above sites, I would then require those sites that we don't have complete control over to set up a web service where I can download a database dump or CSV file for indexing. I'm not quite sure how a spider would come into play here, so I need your opinion. Sphinx or a spider? Thanks!

  • Google Maps API v3: Can I setZoom after fitBounds?

    - by chris
    I have a set of points I want to plot on an embedded Google Map (API v3). I'd like the bounds to accommodate all points unless the zoom level is too low (i.e., zoomed out too much). My approach has been like this:

        var bounds = new google.maps.LatLngBounds();
        // extend bounds with each point
        gmap.fitBounds(bounds);
        gmap.setZoom( Math.max(6, gmap.getZoom()) );

    This doesn't work. The last line, gmap.setZoom(), doesn't change the zoom level of the map if called directly after fitBounds. Is there a way to get the zoom level of a bounds without applying it to the map? Any other ideas to solve this?
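
    A pattern that is known to work around this: fitBounds computes and applies its zoom asynchronously, so the clamp has to run in a one-shot event listener rather than on the next statement. A sketch, reusing gmap and bounds from the question and keeping the minimum zoom of 6:

        // fitBounds() applies its zoom asynchronously, so clamp the zoom in a
        // one-shot listener instead of immediately after the call
        google.maps.event.addListenerOnce(gmap, 'bounds_changed', function() {
            if (gmap.getZoom() < 6) {
                gmap.setZoom(6);
            }
        });
        gmap.fitBounds(bounds);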

  • Design for Vacation Tracking System

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.). At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever, and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now. I started with a basic MS Access schema like this:

        Employees (EmpID int, EmpName varchar(50), AllowedDays int)
        Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime)

    But we don't want to spend a lot of time building a schema and database like this and then have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema:

        Vacations (VacationID int, PropName varchar(50), PropValue varchar(50))

    And the table will be populated with data like this:

        VacationID | PropName    | PropValue
        -----------+-------------+------------------
        1          | EmpID       | 4
        1          | EmpName     | James Jones
        1          | Reason      | Vacation
        1          | BeginDate   | 2/24/2010
        1          | EndDate     | 2/30/2010
        1          | Destination | Spectate Swamp
        2          | ...         | ...

    I think this is a pretty good, extensible design: we can easily add new properties to the vacation, like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties. I thought of putting them in a separate PropNames table, but it gets complicated to manage all the different data types, and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead. Here is the schema:

        <VacationProperties>
            <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames>
            <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes>
            <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired>
        </VacationProperties>

    I might need more fields than that; I'm not completely sure. I'm parsing the XML like this (would like some feedback on the parsing code):

        string xml = File.ReadAllText("properties.xml");
        Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>");
        string[] pn = m.Value.Split(',');
        // do the same for PropertyTypes, PropertiesRequired

    Then I use the following code to persist configuration changes to the database:

        string sql = "DROP TABLE VacationProperties";
        sql = sql + " CREATE TABLE VacationProperties ";
        sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) ";
        sql = sql + "IsRequired varchar(100))";
        for (int i = 0; i < pn.Length; i++)
        {
            sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")";
        }
        // GlobalConnection is a singleton
        new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader();

    So far so good, but after a few days of this I realized that a lot of it was just a more specific kind of generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it. In order to support routing these configurations through the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services:

        public class VacationConfigurationService : WebService
        {
            [WebMethod]
            public void UpdateConfiguration(string xml)
            {
                // Above code goes here
            }
        }

    Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema, as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc., which together make up the whole Service Oriented Architecture. And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use, so they could make changes pretty easily. Our Visio diagram is kind of hard to read, but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections; they're all analogous to the services in the SOA described above.

    The requirements for this system are (note - I don't control these, they were given to me by management):

    - Main workflow must interface with the Excel spreadsheet, probably through VBA macros (to ease the transition to the new system).
    - Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company voice mail system, but that is not a "hard" requirement.
    - Performance requirements: must handle 250,000 transactions per second; should be able to handle up to 20,000 employees (right now we have 3); 99.99% uptime ("four nines") expected.
    - Must be secure against outside hacking, but users cannot be required to enter a username/password.
    - Platforms: must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc.
    - Time to complete: 6 to 8 weeks.

    My questions are:

    - Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies?
    - How do I integrate the Visio diagram above with Windows Workflow Foundation to call the ConfigurationService and persist workflow changes?
    - Am I missing any important components?
    - Will this be extensible enough to support any scenario via end-user configuration?
    - Will the system scale to the above performance requirements? Will we need any expensive hardware to run it?
    - Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example, would it be difficult to convert this to an iPhone app?
    - How long would you expect this to take? (We've dedicated 1 week for testing, so I'm thinking maybe 5 weeks?)

  • Eclipse jetty:run differs from CLI mvn jetty:run

    - by adam
    I have managed to track down a problem where, within Eclipse, I see no error, but anywhere outside Eclipse the error exists. The error I get when running outside Eclipse is a LazyInitializationException from Hibernate, caused by a ${entity.service.id} reference in the JSP. Outside Eclipse I have tried Jetty standalone, mvn jetty:run from the command line, and Tomcat. I've cleaned the projects, disabled workspace dependency resolution, etc. I'm using Eclipse Galileo, m2eclipse 0.9.8.200905041414, JDK 1.6_17, Maven 2.1.1 (not embedded), Jetty 6.1.22 (standalone and plugin). How is that possible?

  • C#: where does the .dbml file come from?

    - by 5YrsLaterDBA
    I'm learning C# and LINQ now and have lots of questions about it; basically I need a step-by-step tutorial. I suppose the .dbml file is the configuration file of the database? When I double-click it, VS opens it with a kind of design diagram. Can I create/delete/modify tables here? I know I can use Add New Item to add the "LINQ to SQL Classes" template and get a .dbml file - but what's next? Generate tables in the database? Generate a SQL script? Generate .cs files? When, and how?
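
    For what it's worth, the usual LINQ to SQL flow runs the other way from what the question assumes: the .dbml is generated from an existing database (drag tables from Server Explorer onto the designer surface), and the designer then code-generates a DataContext subclass plus one class per table into a .designer.cs file. A minimal usage sketch, with hypothetical Northwind names:

        using System;
        using System.Linq;

        class Demo
        {
            static void Main()
            {
                // NorthwindDataContext and Customers are generated by the designer
                using (var db = new NorthwindDataContext())
                {
                    var londonCustomers = from c in db.Customers
                                          where c.City == "London"
                                          select c;

                    foreach (var c in londonCustomers)
                        Console.WriteLine(c.CompanyName);
                }
            }
        }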

  • Rails - MS-SQL Server problems (unixODBC, FreeTDS) on Mac 10.6

    - by TMB
    I followed the instructions on the Rails wiki and have had success connecting to SQL Server 2000 with tsql - both with DSN-less and DSN connections. I'm running Mac OS X 10.6.3. Wiki instructions here. Installed ruby-odbc, dbi (0.4.0), dbd-odbc (2.4.5), activerecord-sqlserver-adapter (2.3.5). In my database.yml (Rails 2.3.6):

        development:
          adapter: sqlserver
          mode: ODBC
          dsn: 'DRIVER=/usr/local/lib/libtdsodbc.so;TDS_Version=8.0;SERVER=mssql01.discountasp.net;DATABASE=DB_164368_dmusd;Port=1433;uid=DB_164368_dmusd_user;pwd=Schools77;'

    This yields the following error:

        ODBC::Error: S1090 (0) [unixODBC][Driver Manager]Invalid string or buffer length

    When I attempt to use a DSN connection, I get the following error:

        ODBC::Error: IM002 (0) [unixODBC][Driver Manager]Data source name not found, and no default driver specified

    I have in fact verified that the FreeTDS driver (libtdsodbc.so) is installed and the path is correct. Can anyone spot the error of my ways? Thanks in advance.
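
    In case it helps, the IM002 error usually just means unixODBC never saw the DSN: it looks the name up in odbc.ini, and the driver alias in odbcinst.ini. A sketch of the two files, where the paths and the DSN name are assumptions for a typical /usr/local install:

        # /usr/local/etc/odbcinst.ini - registers the driver under an alias
        [FreeTDS]
        Driver = /usr/local/lib/libtdsodbc.so

        # /usr/local/etc/odbc.ini - the DSN the adapter looks up
        [mssql_dev]
        Driver      = FreeTDS
        Server      = mssql01.discountasp.net
        Port        = 1433
        TDS_Version = 8.0
        Database    = DB_164368_dmusd

    database.yml would then reference dsn: mssql_dev with separate username/password keys. (The S1090 error on the DSN-less string is a different beast; one common cause is an architecture mismatch between unixODBC and the driver build, which is worth ruling out by checking what architecture /usr/local/lib/libtdsodbc.so was built for.)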

  • Device driver with generic communication media layer

    - by Tom
    Greetings all, I'm trying to implement a driver for an embedded device with a generic communication media layer. I'm not sure what the best way to do it is, so I'm seeking advice from more experienced Stack Overflow users :). Basically we've got devices around the country communicating with our head office. The usual form of communication is over TCP/IP, but it could also be USB, an RF dongle, IR, etc. The plan is to have an object corresponding to each of these devices, handling the proprietary protocol on one side and requests/responses from other internal systems on the other. The thing is how to create something generic between the media and the handling objects. I had a play around with a TCP dispatcher using boost.asio, but trying to create something generic seems like a nightmare :). Has anybody tried to do something like that? What is the best way to do it? Cheers, Tom
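
    One common shape for this, offered as a sketch rather than a prescription: hide each medium behind a small byte-stream interface, so the device objects that speak the proprietary protocol never know what carries the bytes. All names here are invented; a TcpTransport would wrap the boost.asio code already written, and UsbTransport, RfTransport, etc. would follow the same contract.

        #include <cstddef>
        #include <cstdint>
        #include <memory>
        #include <utility>
        #include <vector>

        // Generic media layer: one implementation per transport (TCP, USB, RF, IR...)
        class Transport {
        public:
            virtual ~Transport() {}
            virtual std::size_t read(std::uint8_t* buf, std::size_t len) = 0;  // blocking
            virtual void write(const std::uint8_t* buf, std::size_t len) = 0;
        };

        // Device object: speaks the proprietary protocol over whatever it is handed
        class Device {
        public:
            explicit Device(std::unique_ptr<Transport> link) : link_(std::move(link)) {}

            std::vector<std::uint8_t> request(const std::vector<std::uint8_t>& frame) {
                link_->write(frame.data(), frame.size());
                std::vector<std::uint8_t> reply(256);            // arbitrary frame cap
                reply.resize(link_->read(reply.data(), reply.size()));
                return reply;
            }

        private:
            std::unique_ptr<Transport> link_;
        };

    The protocol code then has exactly one seam to test against: a fake Transport fed with canned frames.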

  • MySQL query returns different set of results on two identical databases

    - by 1nsane
    I exported a live MySQL database (running MySQL 5.0.45) to a local copy (running MySQL 5.1.33) with no errors on import. There is a view in the database that, when executed locally, returns a different set of data than when executed remotely: it's returning 32 results instead of 63. When I execute the raw SQL, the same problem occurs. I've inspected the data in all the tables being joined, and the counts are the same. The query is simple and has no WHERE conditions - but about 10 joins. Aside from the differences in MySQL versions, I can't find any reason this query would return different results between databases, since they are effectively exact copies. Has anyone experienced a problem like this before?

  • Why do I get this error when I try to push my SQLite3 to Postgresql (via Taps) on Cedar Stack?

    - by rhodee
    I've done quite a bit of research on the Heroku Dev Center and I am now looking to the community for help. Here is my problem: I cannot push my db to the Heroku Cedar stack. I am trying to migrate a SQLite database to PostgreSQL via the Taps gem. When I am ready to deploy I run:

        bundle install --without production
        heroku run db:push

    I get the following result:

        Running db:seed attached to terminal... up, run.17
        sh: db:seed: not found

    And when I run the migration:

        heroku run rake db:migrate

    I get the following:

        Running rake db:migrate attached to terminal... up, run.18
        rake aborted!
        No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
        /usr/local/lib/ruby/1.9.1/rake.rb:2367:in `raw_load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:2007:in `block in load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:2058:in `standard_exception_handling'
        /usr/local/lib/ruby/1.9.1/rake.rb:2006:in `load_rakefile'
        /usr/local/lib/ruby/1.9.1/rake.rb:1991:in `run'
        /usr/local/bin/rake:31:in `<main>'

    Every time I push to Heroku (git push heroku master) it fails because my Gemfile is attempting to install the sqlite3 gem - even though it's inside the development and test groups in my Gemfile. My database.yml production environment still points to the sqlite adapter, even after I have run the following command successfully:

        heroku config:add BUNDLE_WITHOUT="test development" --app app_name_on_heroku

    Out of ideas. Please help. If it's useful I can post the contents of my Gemfile and the results of heroku ps and logs. Cheers

    UPDATE: After following @John's direction I now receive the following terminal message:

        Sending schema
        Schema:        100% |==========================================| Time: 00:00:07
        Sending indexes
        schema_migrat: 100% |==========================================| Time: 00:00:00
        Sending data
        4 tables, 6 records
        schema_migrat:   0% |                                          | ETA:  --:--:--
        Saving session to push_201111070749.dat..
        !!! Caught Server Exception
        HTTP CODE: 500
        Taps Server Error: LoadError: no such file to load -- sequel/adapters/

    And the following warnings (the Taps server's backtrace):

        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:in `require'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:in `block in tsk_require'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:72:in `block in check_requiring_thread'
        <internal:prelude>:10:in `synchronize'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:69:in `check_requiring_thread'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:249:in `tsk_require'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:25:in `adapter_class'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/database/connecting.rb:54:in `connect'
        /app/.bundle/gems/ruby/1.9.1/gems/sequel-3.20.0/lib/sequel/core.rb:119:in `connect'
        /app/lib/taps/db_session.rb:14:in `conn'
        /app/lib/taps/server.rb:91:in `block in <class:Server>'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:in `call'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:in `block in route'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:in `instance_eval'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:in `route_eval'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:500:in `block (2 levels) in route!'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:in `catch'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:in `block in route!'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:in `each'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:in `route!'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:601:in `dispatch!'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:in `block in call!'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `instance_eval'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `block in invoke'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `catch'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `invoke'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:in `call!'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:399:in `call'
        /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/auth/basic.rb:25:in `call'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:in `block in call'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:1005:in `synchronize'
        /app/.bundle/gems/ruby/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:in `call'
        /home/heroku_rack/lib/static_assets.rb:9:in `call'
        /home/heroku_rack/lib/last_access.rb:15:in `call'
        /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:47:in `block in call'
        /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:in `each'
        /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/urlmap.rb:41:in `call'
        /home/heroku_rack/lib/date_header.rb:14:in `call'
        /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.1/lib/rack/builder.rb:77:in `call'
        /app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:76:in `block in pre_process'
        /app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in `catch'
        /app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:74:in `pre_process'
        /app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:57:in `process'
        /app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/connection.rb:42:in `receive_data'
        /app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run_machine'
        /app/.bundle/gems/ruby/1.9.1/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run'
        /app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/backends/base.rb:57:in `start'
        /app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/server.rb:156:in `start'
        /app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/controllers/controller.rb:80:in `start'
        /app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:177:in `run_command'
        /app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/lib/thin/runner.rb:143:in `run!'
        /app/.bundle/gems/ruby/1.9.1/gems/thin-1.2.7/bin/thin:6
        /usr/ruby1.9.2/bin/thin:19:in `load'
        /usr/ruby1.9.2/bin/thin:19

  • Django data migration when changing a field to ManyToMany

    - by Ken H
    I have a Django application in which I want to change a field from a ForeignKey to a ManyToManyField. I want to preserve my old data. What is the simplest/best process to follow for this? If it matters, I use sqlite3 as my database back-end. If my summary of the problem isn't clear, here is an example. Say I have two models:

        class Author(models.Model):
            author = models.CharField(max_length=100)

        class Book(models.Model):
            author = models.ForeignKey(Author)
            title = models.CharField(max_length=100)

    Say I have a lot of data in my database. Now, I want to change the Book model as follows:

        class Book(models.Model):
            author = models.ManyToManyField(Author)
            title = models.CharField(max_length=100)

    I don't want to "lose" all my prior data. What is the best/simplest way to accomplish this? Ken
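
    One low-tech route, sketched under the assumption that no migration tool (South, etc.) is in play: add the ManyToManyField under a new, hypothetical name alongside the old ForeignKey, create its join table in sqlite3 (e.g. from the output of manage.py sqlall), copy the data across with a one-off script, and only then drop the old column:

        # transitional model: keep the old FK while the new M2M is populated
        class Book(models.Model):
            author = models.ForeignKey(Author)   # old field, removed later
            # new field; the name and related_name are assumptions
            # (related_name avoids a reverse-accessor clash with the FK)
            authors = models.ManyToManyField(Author, related_name='books')
            title = models.CharField(max_length=100)

        # one-off copy, run from `manage.py shell`
        for book in Book.objects.all():
            book.authors.add(book.author)

    After verifying the copy, delete the author field from the model and drop its column from the sqlite3 table.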

  • Entity Framework and consuming a WCF service

    - by nakori
    I'm getting data where the database is hidden behind a WCF service. Is it possible to use Entity Framework in a scenario where I have custom objects coming from a web service? (There is no access to the external database, and no current plans for insert/update/delete logic.) Starting with an empty EF model and adding an entity, I get this error on compile:

        No mapping specified for instances of the EntitySet and AssociationSet in the EntityContainer ..

    Is it possible to make an entity this way and fill it with data received from an object? (In this case from WCF, but it could also be a predefined model class/XML data.) If the web service returned a Customer object, I could do something like this with a dataset: make an unbound table, loop through the customer properties adding them to a temp row, then add it with tbl_Customer.Addtbl_CustomerRow(customerRow) to get my view filled. Thanks, nakori

  • Accessing original WordPress blog's DB from sub-blogs in network mode

    - by aendrew
    I'm helping with a university radio station website that runs WordPress and was recently switched over to network (multi-site/multi-user) mode by myself. The setup is as follows: The parent site (www.stationID.com) runs a bunch of custom-built plugins to construct things like the show schedule calendar, the "Now Playing" widget, the podcast list, et cetera. The new network websites ("wiki.stationID.com", "buddypress.stationID.com" for instance) run the same template as the parent site, but it stops after rendering the first section, because the widgets from point 1 grab data from the main site's database tables, which are not available to sub-blogs. My question is: how do I get data from the main site's tables on the sub-domain sub-blogs? A related question is: how do I set $wpdb->prefix to be the same as the parent site's on the child websites without negatively affecting how the child website pulls data from its own database? Any help would be awesome, thanks!
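
    WordPress's multisite API has a built-in way to do this without touching prefixes by hand: switch the global context to the main blog, query, then switch back. A sketch, assuming the parent site is blog ID 1 and using a hypothetical table name:

        <?php
        global $wpdb;

        switch_to_blog( 1 );   // 1 = parent site's blog ID (an assumption)

        // inside the switch, $wpdb->prefix resolves to the parent site's prefix
        $shows = $wpdb->get_results(
            "SELECT * FROM {$wpdb->prefix}station_schedule"   // hypothetical table
        );

        restore_current_blog();

    This keeps each child blog's own queries untouched, which sidesteps the prefix question entirely.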

  • Modal MessageBox for ASP.NET without jQuery

    - by Joe
    I have an assembly with custom ASP.NET server controls that is used in several, mostly in-house, ASP.NET 2.0 applications. The server controls use simple modal popup message boxes, which are currently implemented using the JavaScript alert and confirm functions. I want to release a new version of this assembly that uses a better solution for message boxes, including support for Yes/No buttons. The appearance would be something like a simplified version of the Ajax Control Toolkit ModalPopup extender (sample here). My constraints are that this should be as easy as possible to integrate into existing ASP.NET 2.0 applications without introducing new dependencies: ideally just drop in a new version of the assembly that contains all the JavaScript it needs as an embedded resource, and possibly a CSS file. Because of this constraint, I am not considering solutions I've seen that use jQuery, or the ASP.NET Ajax Control Toolkit, which appears to require adding elements to pages that use the extenders (ScriptManagers and the like). Any recommendations?
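
    A dependency-free modal is essentially an overlay div plus buttons wired to callbacks, which keeps the embedded-resource constraint intact; a rough sketch (every name here is invented, and older IE would need a filter fallback for the rgba overlay):

        // Minimal modal message box with Yes/No callbacks - no external libraries
        function showMessageBox(message, onYes, onNo) {
            var overlay = document.createElement('div');
            overlay.style.cssText =
                'position:fixed;top:0;left:0;width:100%;height:100%;' +
                'background:rgba(0,0,0,0.4);z-index:1000;';

            var box = document.createElement('div');
            box.style.cssText =
                'position:absolute;top:40%;left:50%;width:300px;margin-left:-150px;' +
                'background:#fff;border:1px solid #888;padding:12px;text-align:center;';
            box.appendChild(document.createTextNode(message));

            function addButton(label, handler) {
                var b = document.createElement('input');
                b.type = 'button';
                b.value = label;
                b.onclick = function () {
                    document.body.removeChild(overlay);   // dismiss, then notify
                    if (handler) handler();
                };
                box.appendChild(b);
            }
            addButton('Yes', onYes);
            addButton('No', onNo);

            overlay.appendChild(box);
            document.body.appendChild(overlay);
        }

    The server control could register this script once via ClientScriptManager.RegisterClientScriptResource and emit a call to showMessageBox where it previously emitted confirm().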

  • Auto-increment column in JDO, GAE

    - by Viktor
    Hi, I have a data class with some fields. One is a URL that I consider the PK: if I add a new item (do a new sync) and save it, it should overwrite the item in the database if it has the same URL. But I also need a "normal" Long id that is incremented for every object in the database, and for this one I always get null unless I tag it as the PK. How can I get this incrementing behaviour without having the column as my PK?

        @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY)
        private Long _id;

        @Persistent
        private String _title;

        @PrimaryKey
        @Persistent
        private String _url;

    /Viktor

  • Eclipse Code Templates with Cobol

    - by Bruno Brant
    People, my team is just beginning to learn how to use COBOL on Eclipse (as part of the Rational Developer for System z package), and one of our most desired features is code templates, or code snippets. What we'd like to have is code completion based on snippets, just like we have in Java. For example, when I type try and hit Ctrl+Space, Eclipse shows me a list of completion options, one of which creates a try/catch block. In COBOL, one could leverage this when creating, for example, embedded SQL blocks, like:

        EXEC SQL
            SELECT field, field, field
            FROM table
            WHERE field = value, field = value
        END-EXEC.

    However, for some reason, it seems that Eclipse treats COBOL a little differently (no wonder why) from other languages. As such, when looking for the code templates in the preferences menu for COBOL, its appearance is very different from the Java one. The question is: how does one use Eclipse's code templates with COBOL?

  • Error LNK1223 on ARM builds

    - by Seva Alekseyev
    eMbedded Visual C++ 3 project, building for PocketPC 2000. On the ARM build, the linker throws the following error: fatal error LNK1223: invalid or corrupt file: file contains invalid pdata contributions On SH3, the project compiles, links, and works. The project also works when built for ARM on Visual C++ 2005, but I need to test builds specifically from eVC3. Any ideas, please? What's a pdata contribution and how do I affect (or disable) those? It's something to do with exception handling; I've tried disabling SEH by specifying /EHsc, to no effect.

  • How can I use the SSRS ReportViewer from VS 2008 in a VS2010 project?

    - by Adrian Grigore
    Hi, I'm working on an ASP.NET MVC 2 / .NET 3.5 project which includes SSRS 2008 reports. After migrating to VS 2010 RC, the new Web Forms report viewer has been giving me so much trouble that I'd like to use the old report viewer from VS 2008 again. Now I'm just wondering what would be the easiest way to do that. The report viewer is embedded in a Web Forms ASPX file which is loaded in an IFrame by the MVC view. Report parameters are currently stored as session variables, and for security reasons I would prefer not to exchange them for HTTP POST or GET parameters. So I can't just put the report viewer in a separate application and build that with VS 2008. Moving the entire project back to VS 2008 is not an option. So, what's the easiest way for me to use the VS 2008 ReportViewer in VS 2010? Is there a way to grab an assembly from VS 2008 and use it in my project? Thanks, Adrian

  • Visual C# 2008 Express connection to SQL Server 2008 Express problem

    - by Phil
    Hi guys, I have a problem with Visual C# 2008 Express (SP1) connecting to SQL Server 2008 Express. The "Add Connection" window (wherever initiated) doesn't list the existing SQL Server, and offers no option for SQL Server except the Compact edition. Note that I've got VWD 2008 Express (SP1) on the same machine, which shows the window normally (with the SQL Server option listed), and SQL Server Management Studio works fine with the server as well. I've seen other similar posts and took some of the advice: reinstalled VC#, checked that the services run OK, etc., but with no success with VC# so far. Again, on the same machine VWD shows the dialog with the SQL Server option as expected, but VC# shows only three options in the "Change data source" dialog:

        1. Microsoft Access Database File (OLE DB)
        2. Microsoft SQL Server Compact 3.5
        3. Microsoft SQL Server Database File

    Any ideas? Thanks in advance, Phil

  • SQL Server Full-Text Search: Hung processes with MSSEARCH wait type

    - by CheeseInPosition
    We have a SQL Server 2005 SP2 machine running a large number of databases, all of which contain full-text catalogs. Whenever we try to drop one of these databases or rebuild a full-text index, the drop or rebuild process hangs indefinitely with a MSSEARCH wait type. The process can't be killed, and a server reboot is required to get things running again. Based on a Microsoft forums post [1], it appears that the problem might be an improperly removed full-text catalog. Can anyone recommend a way to determine which catalog is causing the problem, without having to remove all of them?

    [1] http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2681739&SiteID=1 - "Yes we did have full text catalogues in the database, but since I had disabled full text search for the database, and disabled msftesql, I didn't suspect them. I got however an article from Microsoft support, showing me how I could test for catalogues not properly removed. So I discovered that there still existed an old catalogue, which I, after and only after re-enabling full text search, was able to delete, and since then my backup has worked."
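
    If it helps, SQL Server 2005 exposes catalog metadata through catalog views, so leftover entries can be listed per database without dropping anything; a diagnostic sketch:

        -- full-text catalogs this database still knows about
        SELECT name, path, is_default
        FROM sys.fulltext_catalogs;

        -- sessions currently stuck on full-text work
        SELECT session_id, command, wait_type, wait_time
        FROM sys.dm_exec_requests
        WHERE wait_type = 'MSSEARCH';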

  • Show EPS file in IE8 with standards modes set to IE8

    - by runrunraygun
    With this set, my EPS files are not rendered by ABCpdf, but set to EmulateIE7 they work fine:

        <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE8">

    I know EPS isn't a standard web format, but the files are embedded into a PDF using a bit of ABCpdf magic. Because IE8 isn't even trying to show them, they no longer appear in the PDF as they previously did:

        <embed style="abcpdf-tag-visible: true;width:40px;height:40px;"
               id="C:\myfile.eps" src="myfile.eps">

  • SqlDateTime overflow on INSERT when date is correct using a Linq to SQL DataContext

    - by Jan Hoefnagels
    Dear LINQ experts, I get a SqlDateTime overflow error ("Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.") when doing an INSERT using a LINQ DataContext connected to a SQL Server database, at the point where I call SubmitChanges(). When I use the debugger, the date value is correct. Even if I temporarily update the code to set the date value to DateTime.Now, it will not do the insert. Has anybody found a work-around for this behaviour? Maybe there is a way to check what SQL the DataContext submits to the database.
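
    There is in fact a built-in way to see the emitted SQL: the LINQ to SQL DataContext has a Log property that echoes every command it sends. A sketch with a hypothetical context name (a common culprit for this error, worth checking in the log, is some other non-nullable DateTime column left at default(DateTime), i.e. 0001-01-01, which is below SQL Server's 1753 minimum):

        using System;

        // db is the generated DataContext; dump its SQL to the console
        using (var db = new MyDataContext())
        {
            db.Log = Console.Out;    // or any TextWriter, e.g. a StreamWriter
            // ... make the changes that fail ...
            db.SubmitChanges();      // the emitted INSERT appears in the log first
        }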

  • ASP.NET Load unmanaged dll from bin folder

    - by Quandary
    Question: I use an embedded Firebird database in ASP.NET. Firebird has a .NET wrapper around native DLLs. The problem is that, with the .NET compilation and execution process, the DLLs get shadow-copied to a temporary folder - unfortunately only the .NET DLLs, not the native DLL. See http://msdn.microsoft.com/en-us/library/ms366723.aspx for details. This makes it necessary to put the unmanaged DLL somewhere in the system32 directory (or any other directory in the PATH environment variable). Now, I want to change the wrapper/native DLL (open source) so that it loads the DLL even if it is only in the bin folder. My problem is: how can I, in .NET, load an unmanaged DLL from an absolute path? The absolute path is determined at runtime, not at compile time...
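
    One approach that is known to work with the Win32 loader: if the native library is loaded into the process by absolute path before the wrapper's first P/Invoke call, Windows finds it already resident and never consults the search path. A sketch; the file name is the usual one for embedded Firebird, but treat it as an assumption:

        using System;
        using System.IO;
        using System.Runtime.InteropServices;
        using System.Web;

        internal static class NativeLoader
        {
            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
            private static extern IntPtr LoadLibrary(string lpFileName);

            // Call once at startup, e.g. from Application_Start in Global.asax
            public static void PreloadFirebirdClient()
            {
                // bin folder path resolved at runtime, not at compile time
                string path = Path.Combine(HttpRuntime.BinDirectory, "fbembed.dll");

                if (LoadLibrary(path) == IntPtr.Zero)
                    throw new FileNotFoundException("Could not load native library", path);
            }
        }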
