Search Results

Search found 15087 results on 604 pages for 'copy constructor'.

  • ORDER BY in a SQL Server 2008 view

    - by eidylon
    Hi all... we have a view in our database which has an ORDER BY in it. Now, I realize views generally don't order, because different people may use them for different things and want them ordered differently. This view, however, is used for a VERY SPECIFIC use case which demands a certain order. (It is team standings for a soccer league.) The database is SQL Server 2008 Express, v10.0.1763.0, on a Windows Server 2003 R2 box. The view is defined as such:

        CREATE VIEW season.CurrentStandingsOrdered
        AS
        SELECT TOP 100 PERCENT *, season.GetRanking(TEAMID) RANKING
        FROM season.CurrentStandings
        ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
                 GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING

    It returns: GENDER, TEAMYEAR, CODE, TEAMID, CLUB, NAME, WINS, LOSSES, TIES, GOALS_FOR, GOALS_AGAINST, DIFFERENTIAL, POINTS, FORFEITS, RANKING.

    Now, when I run a SELECT against the view, it orders the results by GENDER, TEAMYEAR, CODE, TEAMID. Notice that it is ordering by TEAMID instead of POINTS as the ORDER BY clause specifies. However, if I copy the SQL statement and run it exactly as is in a new query window, it orders correctly as specified by the ORDER BY clause.
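
    A note on this behavior, since it bites many people: from SQL Server 2005 onward the optimizer ignores an ORDER BY inside a view when it is paired with TOP 100 PERCENT, so a view can never guarantee result order. The dependable fix is to keep the view unordered and order in the query that reads it. A minimal sketch:

        -- Order at query time, not inside the view definition
        SELECT *
        FROM season.CurrentStandingsOrdered
        ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
                 GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING;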

  • How to accept a confirmation dialog automatically in PowerShell for Outlook

    - by user2919845
    I have a script that exports attachments from email in Outlook (see below). It works correctly on one PC, but on another PC there is a problem: Outlook shows a security dialog and wants an answer: Allow / Deny / Help. If I manually click Allow or Deny, it works correctly. I want to automate it. Can you give me a suggestion for how to do that in PowerShell? I have tried to configure Outlook not to show this message, but I didn't succeed. My script:

        # <-- Script --------->
        # Works with the Outlook Inbox folder: checks whether an email has
        # ".txt" attachments and saves those attachments to $filepath.

        # path for exported files - attachments
        $filepath = "d:\Exported_files\"

        # create the Outlook COM object
        $o = New-Object -comobject outlook.application
        $n = $o.GetNamespace("MAPI")

        # $f - the Inbox folder ("dorucena posta"); 6 = Inbox
        $f = $n.GetDefaultFolder(6)

        # take the newest 10 emails, and of those only the ones with attachments
        $f.Items | select -last 10 | Where {$_.Attachments} | foreach {
            # process only unread mail
            if ($_.unread -eq $True) {
                # mark processed mail as read, so it is not processed again the next day
                $_.unread = $False
                $SenderName = $_.SenderName
                Write-Host "Email from: ", $SenderName
                # process all attachments
                $_.attachments | foreach {
                    $a = $_.filename
                    If ($a.Contains(".txt")) {
                        Write-Host $SenderName, " ", $a
                        # copy *.txt attachments to folder $filepath
                        $_.saveasfile((Join-Path $filepath "$a"))
                    }
                }
            }
        }
        Write-Host "Finish"
        # <------ End Script ---------------------------------->
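
    For context: that dialog is Outlook's Object Model Guard, and by design it cannot be clicked away from script. The usual remedies sit outside PowerShell: keep an up-to-date antivirus registered with Windows (Outlook then suppresses the prompt), or configure the Programmatic Access trust / Group Policy settings for your Office version. A heavily hedged sketch for inspecting the relevant policy values - the key path and value names below are assumptions that vary by Office version and must be verified:

        # Hypothetical: Outlook 2010 (14.0) policy location; verify for your version.
        $key = "HKCU:\Software\Policies\Microsoft\Office\14.0\Outlook\Security"
        Get-ItemProperty -Path $key -ErrorAction SilentlyContinue |
            Select-Object PromptOOMSaveAs, PromptOOMAddressInformationAccess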

  • I need an IDE for TYPO3 core development in PHP

    - by Flugan
    PHP in itself is difficult for IDEs because of the dynamic nature of the language. My current development environment is mostly NetBeans against a local SVN copy of the codebase, set up in a local development web server. The code is full-text indexed by Vista's search engine for almost instant searches.

    I do a lot of development directly against the main development server using a combination of tools: PuTTY to interact with the server and deploy by updating an SVN checkout on the development server; TortoiseSVN locally for a fairly rich SVN experience (NetBeans obviously has SVN integration too - most of the changes on the remote server are committed from the PuTTY session); and WinSCP to interact with the development server through a Norton Commander-like interface with good PuTTY integration. Finally, my text editor for remote editing is Notepad++, out of habit and because of some nice features and a good price.

    What I'm really missing is good PHP editing. Because of the way TYPO3 works, almost all objects are instantiated through the makeInstance abstraction, which returns either the base class or the customized class if the framework has been extended. I'm not looking for a magic editing package; I would like to find an editor which can use annotations to specify the type of commonly used variables.
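
    The annotation-driven typing asked about here is what PHPDoc @var comments provide, and most PHP IDEs (NetBeans, Eclipse PDT, PhpStorm) honor some form of them, though the exact comment style varies by IDE. A minimal sketch - the extension class name is hypothetical:

        <?php
        /** @var tx_myext_pi1 $plugin  Tells the IDE the concrete type behind makeInstance(). */
        $plugin = t3lib_div::makeInstance('tx_myext_pi1');
        $plugin->main('', array()); // completion and navigation now work on $plugin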

  • Can't find momd file: Core Data problems

    - by thekevinscott
    Aw geez! I screwed something up! I'm a Core Data noob, working on my first iOS app. After much Stack Overflowing I'm using this code:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"CoreData" ofType:@"momd"];
        if (!path) {
            path = [[NSBundle mainBundle] pathForResource:@"CoreData" ofType:@"mom"];
        }
        NSAssert(path != nil, @"Unable to find Resource in main bundle");

    CoreData is the name of my app. I tried to put initial data into the app by finding the path to the SQLite file in my iPhone Simulator, then going in and inserting into that SQLite file directly. But at some point I moved the SQLite file (thinking it would create a fresh copy), deleted the app from the simulator, and now the SQLite file is gone. I'm not sure if I'm leaving out some part of the process (this was a few hours ago), but the end result is that everything is screwed up.

    How do I regenerate this SQLite / momd file? "Clean" and "Clean all targets" are grayed out. I'm happy to post the relevant code from my app that would help shed some light on this problem, but there's tons of code relating to Core Data which I don't understand, so I'm not sure what part to post! Any help is greatly appreciated.
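
    Two facts usually untangle this situation. The .momd is not a file you maintain by hand: it is compiled into the app bundle from the project's .xcdatamodeld on every build, so it only goes missing if the model file left the project or target. And the .sqlite store is recreated automatically the first time the persistent store coordinator runs against a path where no store exists. A minimal sketch of the standard loading sequence:

        // The model ships inside the bundle; rebuilding the project regenerates it.
        NSURL *modelURL = [[NSBundle mainBundle] URLForResource:@"CoreData"
                                                  withExtension:@"momd"];
        NSManagedObjectModel *model =
            [[NSManagedObjectModel alloc] initWithContentsOfURL:modelURL];
        // Handing `model` to an NSPersistentStoreCoordinator and calling
        // addPersistentStoreWithType:NSSQLiteStoreType... creates a fresh
        // .sqlite file at the target URL if none exists yet.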

  • MySQL to PostgreSQL and Named Scope

    - by Lowgain
    I've got a named scope for one of my models that works fine. The code is:

        named_scope :inbox_threads, lambda { |user| {
          :include    => [:deletion_flags, :recipiences],
          :conditions => ["recipiences.user_id = ? AND deletion_flags.user_id IS NULL", user.id],
          :group      => "msg_threads.id"
        }}

    This works fine on my local copy of the app with a MySQL database, but when I push my app to Heroku (which only uses PostgreSQL), I get the following error:

        ActiveRecord::StatementInvalid (PGError: ERROR: column "msg_threads.subject"
        must appear in the GROUP BY clause or be used in an aggregate function:
        SELECT "msg_threads"."id" AS t0_r0, "msg_threads"."subject" AS t0_r1,
        "msg_threads"."originator_id" AS t0_r2, "msg_threads"."created_at" AS t0_r3,
        "msg_threads"."updated_at" AS t0_r4, "msg_threads"."url_key" AS t0_r5,
        "deletion_flags"."id" AS t1_r0, "deletion_flags"."user_id" AS t1_r1,
        "deletion_flags"."msg_thread_id" AS t1_r2, "deletion_flags"."confirmed" AS t1_r3,
        "deletion_flags"."created_at" AS t1_r4, "deletion_flags"."updated_at" AS t1_r5,
        "recipiences"."id" AS t2_r0, "recipiences"."user_id" AS t2_r1,
        "recipiences"."msg_thread_id" AS t2_r2, "recipiences"."created_at" AS t2_r3,
        "recipiences"."updated_at" AS t2_r4
        FROM "msg_threads"
        LEFT OUTER JOIN "deletion_flags" ON deletion_flags.msg_thread_id = msg_threads.id
        LEFT OUTER JOIN "recipiences" ON recipiences.msg_thread_id = msg_threads.id
        WHERE (recipiences.user_id = 1 AND deletion_flags.user_id IS NULL)
        GROUP BY msg_threads.id)

    I'm not as familiar with the workings of Postgres, so what would I need to add here to get this working? Thanks!
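
    The underlying difference: PostgreSQL (before 9.1) requires every selected column to appear in GROUP BY or an aggregate, and the eager-loaded deletion_flags/recipiences columns never can. One hedged sketch of a workaround is to stop grouping and deduplicate with DISTINCT over explicit joins instead - note this trades away the eager loading that :include provided:

        named_scope :inbox_threads, lambda { |user| {
          :select     => "DISTINCT msg_threads.*",
          :joins      => "LEFT OUTER JOIN deletion_flags ON deletion_flags.msg_thread_id = msg_threads.id " +
                         "LEFT OUTER JOIN recipiences ON recipiences.msg_thread_id = msg_threads.id",
          :conditions => ["recipiences.user_id = ? AND deletion_flags.user_id IS NULL", user.id]
        }}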

  • How to filter a node list based on the contents of another node list

    - by ~otakuj462
    Hi, I'd like to use XSLT to filter a node list based on the contents of another node list. Specifically, I'd like to filter a node list such that elements with identical id attributes are eliminated from the resulting node list. Priority should be given to one of the two node lists. The way I originally imagined implementing this was to do something like this:

        <xsl:variable name="filteredList1"
                      select="$list1[not($list2[@id_from_list1 = @id_from_list2])]"/>

    The problem is that the context node changes in the predicate for $list2, so I don't have access to the attribute @id_from_list1. Due to these scoping constraints, it's not clear to me how I would be able to refer to an attribute from the outer node list using nested predicates in this fashion. To get around the issue of the context node, I've tried to create a solution involving a for-each loop, like the following:

        <xsl:variable name="filteredList1">
          <xsl:for-each select="$list1">
            <xsl:variable name="id_from_list1" select="@id_from_list1"/>
            <xsl:if test="not($list2[@id_from_list2 = $id_from_list1])">
              <xsl:copy-of select="."/>
            </xsl:if>
          </xsl:for-each>
        </xsl:variable>

    But this doesn't work correctly, and it's not clear to me how it fails... Using the above technique, filteredList1 has a length of 1 but appears to be empty. It's strange behaviour, and anyhow, I feel there must be a more elegant approach. I'd appreciate any guidance anyone can offer. Thanks.
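
    Two observations resolve this. First, XPath offers current() precisely to escape the inner predicate's context, and '=' between node-sets is existential, so no loop is needed at all. Second, the for-each version "fails" because an xsl:variable with body content yields a result tree fragment - a single root node wrapping the copies, hence length 1 - rather than a node-set. A sketch of both select-based forms, keeping the attribute names from the question:

        <!-- Keep $list1 nodes whose id does not occur among $list2's ids. -->
        <xsl:variable name="filteredList1"
                      select="$list1[not(@id_from_list1 = $list2/@id_from_list2)]"/>

        <!-- Equivalent form using current() to refer back to the outer node: -->
        <xsl:variable name="filteredList1b"
                      select="$list1[not($list2[@id_from_list2 = current()/@id_from_list1])]"/>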

  • Stored Procedure: Reducing Table Data

    - by SumGuy
    Hi guys, a simple question about stored procedures. I have one stored procedure collecting a whole bunch of data in a table. I then call this procedure from within another stored procedure. I can copy the data into a new table created in the calling procedure, but as far as I can see the tables have to be identical. Is this right? Or is there a way to insert only the data I want? For example, I have one procedure which returns this:

        SELECT @batch as Batch, @Count as Qty, pd.Location,
               cast(pd.GL as decimal(10,3)) as [Length],
               cast(pd.GW as decimal(10,3)) as Width,
               cast(pd.GT as decimal(10,3)) as Thickness
        FROM propertydata pd
        GROUP BY pd.Location, pd.GL, pd.GW, pd.GT

    I then call this procedure but only want the following data:

        DECLARE @BatchTable TABLE (
            Batch varchar(50),
            [Length] decimal(10,3),
            Width decimal(10,3),
            Thickness decimal(10,3)
        )

        INSERT @BatchTable (Batch, [Length], Width, Thickness)
        EXEC dbo.batch_drawings_NEW @batch

    So in the second command I don't want the Qty and Location values. However, the code above keeps returning the error:

        Insert Error: Column name or number of supplied values does not match table
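
    That is indeed how INSERT ... EXEC behaves: the receiving table must match the procedure's result set column-for-column. The usual pattern is to capture the full result set into a staging table, then project only the wanted columns. A sketch (the Qty and Location types are assumptions based on the SELECT above):

        DECLARE @Staging TABLE (
            Batch     varchar(50),
            Qty       int,            -- assumed type of @Count
            Location  varchar(50),    -- assumed width
            [Length]  decimal(10,3),
            Width     decimal(10,3),
            Thickness decimal(10,3)
        );

        -- Capture everything the procedure returns...
        INSERT @Staging (Batch, Qty, Location, [Length], Width, Thickness)
        EXEC dbo.batch_drawings_NEW @batch;

        -- ...then keep only the columns this caller cares about.
        INSERT @BatchTable (Batch, [Length], Width, Thickness)
        SELECT Batch, [Length], Width, Thickness FROM @Staging;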

  • Omit return type in C++0x

    - by Clinton
    I've recently found myself using the following macro with gcc 4.5 in C++0x mode:

        #define RETURN(x) -> decltype(x) { return x; }

    And writing functions like this:

        template <class T>
        auto f(T&& x) RETURN(( g(h(std::forward<T>(x))) ))

    I've been doing this to avoid the inconvenience of having to effectively write the function body twice, and of having to keep changes in the body and the return type in sync (which in my opinion is a disaster waiting to happen). The problem is that this technique only works on one-line functions. So when I have something like this (convoluted example):

        template <class T>
        auto f(T&& x) -> ...
        {
            auto y1 = f(x);
            auto y2 = h(y1, g1(x));
            auto y3 = h(y1, g2(x));
            if (y1) { ++y3; }
            return h2(y2, y3);
        }

    Then I have to put something horrible in the return type. Furthermore, whenever I update the function, I'll need to change the return type, and if I don't change it correctly, I'll get a compile error if I'm lucky, or a runtime bug in the worst case. Having to copy and paste changes to two locations and keep them in sync is, I feel, not good practice. And I can't think of a situation where I'd want an implicit cast on return instead of an explicit cast. Surely there is a way to ask the compiler to deduce this information. What is the point of the compiler keeping it a secret? I thought C++0x was designed so such duplication would not be required.
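
    The deduction wished for here did eventually arrive: C++14 lets a function declared with a plain auto return type deduce it from its return statements, with no trailing decltype at all. A sketch, with hypothetical helpers so it compiles on its own:

        // Hypothetical helpers so the sketch is self-contained.
        inline int  g (int v)        { return v + 1; }
        inline int  h (int a, int b) { return a * b; }
        inline long h2(int a, int b) { return static_cast<long>(a) + b; }

        auto f(int x)   // C++14: no trailing return type needed
        {
            auto y1 = g(x);
            auto y2 = h(y1, g(x));
            if (y1) { ++y2; }
            return h2(y1, y2);   // f's return type is deduced as long
        }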

  • ASP.NET C# Windows authentication IIS config

    - by user1566209
    I'm developing a web page where I need to know the user's Windows authentication values, more precisely the name. Other projects here have been built with this kind of authentication, but sadly their creators are long gone and I have no contact or documentation. I'm using Visual Studio 2008 and I'm accessing a web service that is on a remote server. The server is Windows Server 2008 R2 Standard running IIS 7.5. Since I have the source code of the other projects, what I did was copy-paste, and it was working fine when I was calling the web service on my own machine (localhost). The code is the following:

        //1st way
        WindowsPrincipal wp = new WindowsPrincipal(WindowsIdentity.GetCurrent());
        string strUser = wp.Identity.Name;   // ALWAYS GET NT AUTHORITY\NETWORK SERVICE

        //2nd way
        WindowsIdentity winId = WindowsIdentity.GetCurrent();
        WindowsPrincipal winPrincipal = new WindowsPrincipal(winId);
        string user = winPrincipal.Identity.Name;   // ALWAYS GET NT AUTHORITY\NETWORK SERVICE

        //3rd way
        IIdentity WinId = HttpContext.Current.User.Identity;
        WindowsIdentity wi = (WindowsIdentity)WinId;
        string userstr = wi.Name;   // ALWAYS GET string empty

        btn_select.Text = userstr;
        btn_cancelar.Text = strUser;
        btn_gravar.Text = user;

    As you can see, I have three ways to get the same thing and, in a sad manner, show my user's name. As for my web.config, I have:

        <authentication mode="Windows"/>
        <identity impersonate="true" />

    In IIS Manager I have tried lots of combinations of enabling and disabling Anonymous Authentication, ASP.NET Impersonation, Basic Authentication, Forms Authentication and Windows Authentication. Can someone please help me? NOTE: the respective values I get from each attempt are in the code comments.
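
    The symptoms point at anonymous access: when the request is anonymous there is no caller token, so impersonation falls back to the worker-process account (NETWORK SERVICE) and HttpContext.Current.User.Identity.Name comes back empty. The combination that normally yields the caller's account is Windows Authentication enabled, Anonymous Authentication disabled, and authorization that rejects anonymous users - then read the identity from the request, not the thread. A sketch:

        // web.config (system.web):
        //   <authentication mode="Windows"/>
        //   <authorization><deny users="?"/></authorization>
        // IIS 7.5: enable Windows Authentication, disable Anonymous Authentication.

        // Read the authenticated caller from the request context:
        string user = HttpContext.Current.User.Identity.Name;   // DOMAIN\user

        // WindowsIdentity.GetCurrent() reports the process/thread token, which
        // is why it shows NETWORK SERVICE unless impersonation is in effect.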

  • ASP.NET Web Application: use 1 or multiple virtual directories

    - by tster
    I am working on a (largish) internal web application which has multiple modules (security, execution, features, reports, etc.). All the pages in the app share navigation, CSS, JS, controls, etc. I want to make a single "Web Application" project which includes all the pages for the app and references various projects containing the database and business logic. However, some of the people on the project want separate projects for the pages of each module. To make this clearer, this is what I'm advocating:

        /WebInterface*
        /SecurityLib
        /ExecutionLib
        etc...

    And here is what they are advocating:

        /SecurityInterface*
        /SecurityLib
        /ExecutionInterface*
        /ExecutionLib
        etc...

        * project will be published to a virtual directory of IIS

    Basically, what I'm looking for is the advantages of both approaches. Here is what I can think of so far.

    Single virtual directory pros:

        - Modules can share a single MasterPage
        - Modules can share UserControls (this will be common)
        - Links to other modules are within the same virtual directory, and thus don't need to be fully qualified
        - Less chance of having incompatible module versions together

    Multiple virtual directories pros:

        - Can publish a new version of a single module without disrupting other modules
        - Modules are more compartmentalized; it is less likely that changes will break other modules

    I don't buy those arguments, though. First, using load-balanced servers (which we will have) we should be able to publish new versions of the project with zero downtime, assuming there are no breaking database changes. Second, if something "breaks" another module, then there is either an improper dependency, or the break will show up eventually in the other module when the developers copy over the latest version of the UserControl, MasterPage or DLL. As a point of reference, there are about 10 developers on the project at about 50% of their time. The initial development will be about 9 months.

  • In search of my next great .NET read

    - by user127954
    Just got done with "The Art of Unit Testing". It was a great read and I think everyone should go buy a copy. With that said, I think the next book I'd like to read would be an architecture/design book that focuses heavily on building your objects/software in such a way that it is:

        - Low coupling
        - High cohesion
        - Easily maintainable / extendable
        - Easy to test
        - Easy to navigate / debug

    The above characteristics are the most important ones, but maybe it would also cover (though this is not necessary) designing for:

        - Performance - don't want to design a system and at the end find out it's dog slow :)
        - Scalability - again, don't want to design something and at the end find out it won't scale

    I'd also prefer (but again, not necessarily):

        - Something newer - architectural principles seem to gradually evolve and improve over time, and I'd like something with current thinking
        - .NET as the illustrating language - like I said above, it's not mandatory, but since it's what I use every day I'd prefer it; it doesn't really matter if it's in VB.NET or C#

    Some of the topics it would talk about: how to minimize dependencies, and using interfaces throughout your solution rather than concrete classes. Maybe it would contrast and compare some of the newer design principles like DDD, the Repository pattern, etc. I already have "Clean Code" (don't know if it's this type of book or not) and "Working Effectively with Legacy Code" on my radar, but I'd like to read a book based on the topics I talked about above first. Is there such a book?

  • Design an Application That Stores and Processes Files

    - by phasetwenty
    I'm tasked with writing an application that acts as a central storage point for files (usually document formats) as provided by other applications. It also needs to take commands like "file 395 needs a copy in X format", at which point some work is offloaded to a third-party application. I'm having trouble coming up with a strategy for this. I'd like to keep the design as simple as possible, so I'd like to avoid big extra frameworks or techniques like threads for as long as it makes sense. The clients are expected to be web applications (for example, one is a Django application that receives files from our customers; the others are not yet implemented). The platform it will be running on is likely going to be Python on Linux, unless I have a strong argument to use something else.

    In the beginning I thought I could fit the information I wanted to communicate into the filenames and let my application parse the filename to figure out what it needed to do, but this is proving too inflexible for the amount of information I'm realizing I need to make available. Another idea is to pair FTP with a database used as a communication medium (the client uploads a file and updates the database with a command as a row in a table), but I don't like this idea because adding commands (a known change) looks like it will require adding code as well as changing database schemas. It will also muddy up the interface my clients will have to use. I looked into Pyro to let applications communicate more directly, but I don't like the idea of running an extra name server for this one purpose, and I also don't see a good way to do file transfer within that framework.

    What I'm looking for is techniques and/or technologies applicable to my problem. At the simplest level, I need the ability to accept files and messages with them.
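
    One way to keep "adding commands" from forcing code and schema changes is to make the command a self-describing document rather than a set of columns - new operations then only add new "op" values and argument keys. A hedged Python sketch of such an envelope (the field names are illustrative):

        import json

        command = {
            "op": "convert",             # what to do
            "file_id": 395,              # which stored file
            "args": {"format": "pdf"},   # command-specific arguments
        }
        payload = json.dumps(command)
        print(payload)  # hand to whatever transport is chosen: queue, HTTP, DB row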

  • Output problem with a MySQL query in an MFC program

    - by D.Gaughan
    I'm currently working on a small MFC program that outputs data from a MySQL database. I get output when I'm using a SQL statement that does not contain any variable, e.g.:

        select album from Artists;

    But when I try to use a variable, the program compiles but I get no output, e.g.:

        mysql_perform_query(conn, "select album from Artists where artist = '" + m_search_edit + "'")

    Here is the function mysql_perform_query:

        MYSQL_RES* mysql_perform_query(MYSQL *conn, const char* query)
        {
            // send the query to the database
            if (mysql_query(conn, query))
            {
                // printf("MySQL query error : %s\n", mysql_error(conn));
                // exit(1);
            }
            return mysql_use_result(conn);
        }

    And here is the code block for outputting the data:

        struct connection_details mysqlD;
        mysqlD.server   = "www.freesqldatabase.com";  // where the mysql database is
        mysqlD.user     = "**********";               // the root user of mysql
        mysqlD.password = "***********";              // the password of the root user
        mysqlD.database = "***************";          // the database to pick

        // connect to the mysql database
        conn = mysql_connection_setup(mysqlD);

        CStringA query;
        query.Format("select album from Artists where artist = '%s'", CT2CA(m_search_edit));
        res = mysql_perform_query(conn, query);
        //res = mysql_perform_query(conn, "select distinct artist from Artists");

        while ((row = mysql_fetch_row(res)) != NULL) {
            CString str;
            UpdateData();
            str = ("%s\n", row[0]);
            UpdateData(FALSE);
            m_list_control.AddString(str);
        }

    The m_search_edit variable is the variable for an edit box. I am using Visual Studio 2008 with one copy of this program Unicode and one non-Unicode; I also have a version built with VC++ 6. Any tips on how I can get output from the database using the m_search_edit variable?
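
    Two spots in that loop deserve a look. The line str = ("%s\n", row[0]); is the C++ comma operator, not formatting - it just assigns the narrow char* row[0], which misbehaves in the Unicode build - and the commented-out error check in mysql_perform_query hides any quoting mistakes in the generated SQL. A sketch of a safer loop body using ATL's conversion helpers:

        #include <atlconv.h>   // CA2T / CT2CA conversion macros

        while ((row = mysql_fetch_row(res)) != NULL) {
            if (row[0] != NULL) {
                // Convert the narrow column explicitly; works in both
                // Unicode and MBCS builds of the program.
                CString str(CA2T(row[0]));
                m_list_control.AddString(str);
            }
        }

    Re-enabling the mysql_error() branch (shown with AfxMessageBox, say, instead of printf) will also reveal whether the '%s' substitution produced valid SQL in the first place.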

  • Ant path/pathelement not expanding properties correctly

    - by Jonas Byström
    My property gwt.sdk expands just fine everywhere else, but not inside a path/pathelement:

        <target name="setup.gwtenv">
          <property environment="env"/>
          <condition property="gwt.sdk" value="${env.GWT_SDK}">
            <isset property="env.GWT_SDK" />
          </condition>
          <property name="gwt.sdk" value="/usr/local/gwt" /> <!-- Default value. -->
        </target>

        <path id="project.class.path">
          <pathelement location="${gwt.sdk}/gwt-user.jar"/>
        </path>

        <target name="libs" depends="setup.gwtenv" description="Copy libs to WEB-INF/lib">
        </target>

        <target name="javac" depends="libs" description="Compile java source">
          <javac srcdir="src" includes="**" encoding="utf-8"
                 destdir="war/WEB-INF/classes"
                 source="1.5" target="1.5" nowarn="true"
                 debug="true" debuglevel="lines,vars,source">
            <classpath refid="project.class.path"/>
          </javac>
        </target>

    For instance, placing an echo of ${gwt.sdk} just above the path works, but not inside "project.class.path". Is it a bug, or do I have to do something that I'm not doing?

    Edit: I tried moving the property out of the setup.gwtenv target into "global space"; that helped circumvent the problem.
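
    The edit's workaround hints at the actual rule: this is evaluation order, not a bug. Top-level elements like <path> belong to Ant's implicit target and are evaluated when the build file is parsed, before any named target (including setup.gwtenv) runs, so ${gwt.sdk} is still unset at that moment. Keeping the property at top level works; so does moving the <path> inside a target that runs after setup.gwtenv. A sketch of the latter:

        <target name="javac" depends="libs" description="Compile java source">
          <!-- Evaluated only when the target runs, after setup.gwtenv set gwt.sdk. -->
          <path id="project.class.path">
            <pathelement location="${gwt.sdk}/gwt-user.jar"/>
          </path>
          <javac srcdir="src" includes="**" encoding="utf-8"
                 destdir="war/WEB-INF/classes"
                 source="1.5" target="1.5" nowarn="true"
                 debug="true" debuglevel="lines,vars,source">
            <classpath refid="project.class.path"/>
          </javac>
        </target>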

  • NoClassDefFoundError when trying to reference external jar files

    - by opike
    I have some third-party jar files that I want to reference in my Tomcat web application. I added this line to catalina.properties:

        shared.loader=/home/ollie/dev/java/googleapi_samples/gdata/java/lib/*.jar

    but I'm still getting this error:

        org.apache.jasper.JasperException: javax.servlet.ServletException:
            java.lang.NoClassDefFoundError: com/google/gdata/util/ServiceException
        org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:491)
        org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:401)
        org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313)
        org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

    I verified that com.google.gdata.util.ServiceException is in the gdata-core-1.0.jar file, which is in the directory /home/ollie/dev/java/googleapi_samples/gdata/java/lib. I did bounce Tomcat after I modified catalina.properties.

    Update 1: I tried copying the gdata-core-1.0.jar file into /var/lib/tomcat6/webapp/examples/WEB-INF/lib as a test, but that didn't fix the problem either.

    Update 2: It actually does work when I copy the jar file directly into the WEB-INF/lib directory; there was a permissions issue that I had to resolve. But it's still not working when I use the shared.loader setting, and I reconfirmed that the path is correct.
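
    Given that the WEB-INF/lib route failed on permissions first, the same check applies here: the Tomcat user needs read (and, for directories, execute) access along the whole /home/ollie/... path, and home directories are often mode 700. On the syntax side, shared.loader takes a comma-separated list where a plain directory entry (for unpacked classes) and a "*.jar" wildcard entry are distinct things. A hedged sketch of the entry - catalina.properties is a java.util.Properties file, so the backslash continuation is legal:

        # catalina.properties
        shared.loader=/home/ollie/dev/java/googleapi_samples/gdata/java/lib,\
                      /home/ollie/dev/java/googleapi_samples/gdata/java/lib/*.jar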

  • Linking CSS/JS across directories in PHP/HTML

    - by Azzyh
    Hello. All right, so I'm done doing pages with frames, and now I've split up my design into top.php and bottom.php. top.php links to style.css and ajax_framework.js and such. This works great until I try to include top.php and bottom.php in videos/index.php (another directory), like this:

        include "../top.php";
        <!-- the original index.php content -->
        include "../bottom.php";

    Now when I enter the site, it won't load style.css, probably because the browser thinks style.css needs to be in the videos/ directory; the same goes for ajax_framework.js - both files are in the parent (../) directory. What do I do? Is there some method or technique to solve this in a good way? I don't want to copy style.css and ajax_framework.js into every directory I have, because then if I later want to change something I would need to change it in every directory, which is the last thing I want to do. Hope you have some nice techniques, thank you.
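
    The key fact: relative URLs in the emitted HTML are resolved by the browser against the page's URL, not against the PHP file that printed them, which is exactly why the include works at the top level and breaks one directory down. Root-relative links sidestep the problem entirely. A sketch of what top.php could emit (assuming the assets live in the web root):

        <?php // top.php - root-relative URLs work from any directory depth ?>
        <link rel="stylesheet" type="text/css" href="/style.css" />
        <script type="text/javascript" src="/ajax_framework.js"></script>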

  • How to use db4o IObjectContainer in a web application? (Container lifetime?)

    - by driis
    I am evaluating db4o for persistence for an ASP.NET MVC project, and I am wondering how I should use the IObjectContainer in a web context with regard to object lifetime. As I see it, I can do one of the following:

        1. Create the IObjectContainer at application startup and keep the same instance for the entire application lifetime.
        2. Create one IObjectContainer per request.
        3. Start a server, and get a client IObjectContainer for each database interaction.

    What are the implications of these options in terms of performance and concurrency? Since the database is locked when an IObjectContainer is opened, I am pretty sure that option 2 would give me problems with concurrency - would this also be the case for option 1? As I understand it, if I retrieve an object from an IObjectContainer, it must be saved by the same IObjectContainer instance in order for db4o to identify it as being the same object. Therefore, if I choose option 3, I would have to retrieve the original object, make the necessary changes (copy data from a modified object), and then store it using the same IObjectContainer. Is this true?
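
    A common shape for this in db4o is a blend of options 1 and 3: keep one embedded object server open for the application's lifetime, and give each request its own lightweight client session, which provides per-request isolation without reopening (and re-locking) the file. A hedged C# sketch against the classic db4o API - member names should be verified against the db4o version in use:

        using Db4objects.Db4o;

        // At application startup (e.g. Application_Start); port 0 = embedded, in-process.
        IObjectServer server = Db4oFactory.OpenServer("app.db4o", 0);

        // Per request:
        using (IObjectContainer session = server.OpenClient())
        {
            // Query and store through `session`. Because an object must be
            // stored by the container that loaded it, doing a request's
            // load-modify-store cycle inside one session keeps identities intact.
        }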

  • Using Mercurial (hg), can you just "hg backout" all the commits you did for the files you don't want to push, and then do a push?

    - by Jian Lin
    Using Mercurial (hg), can you just "hg backout" all the commits you did for the files you don't want to push, and then do a push? Because Mercurial (or Git) won't let us push a single file or a single folder to another repository, I am thinking:

        1. Look at the commits we made, and hg backout the ones we don't want to push.
        2. hg out -v to see the list of files that will be pushed.
        3. Now do the push with hg push.

    Is this a good way? I ask because I got the following advice:

        1. Don't commit a file if you don't want it to be pushed. (But sometimes, even just for experimentation, I do want to keep the intermediate revisions - maybe I can hg commit and hg backout right away to prevent it from being pushed.)
        2. Some people told me to just hg clone a tmp repository from the one I want to push to, copy the local file over to this tmp working directory, hg commit to the tmp repository, and then do a push. But I found that the hg clone tmp will take up 400 MB of new data and files, and makes the hard drive work very hard - just to push one file? I would rather not use this method.
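
    For completeness, the proposed sequence looks like this on the command line (the revision number is a placeholder):

        # Undo an unwanted commit as a new commit; -r 123 is hypothetical.
        hg backout -r 123 -m "back out experimental change before pushing"
        # Note: depending on the Mercurial version, backing out a non-tip
        # revision may leave a merge that needs its own `hg commit`.
        hg out -v     # list the changesets (with files) a push would send
        hg push       # push everything that remains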

  • Creating an HTTP handler for IIS that transparently forwards requests to a different port?

    - by Lasse V. Karlsen
    I have a public web server with the following software installed:

        - IIS7 on port 80
        - Subversion over Apache on port 81
        - TeamCity over Apache on port 82

    Unfortunately, both Subversion and TeamCity come with their own web server installations, and they work flawlessly, so I don't really want to try to move them all to run under IIS, if that is even possible. However, I was looking at IIS and I noticed the HTTP redirect part, and I was wondering: would it be possible for me to create an HTTP handler, and install it on a sub-domain under IIS7, so that all requests to, say, http://svn.vkarlsen.no/anything/here are passed to my HTTP handler, which then subsequently creates a request to http://localhost:81/anything/here, retrieves the data, and passes it on to the original requester?

    In other words, I would like IIS to handle transparent forwards to ports 81 and 82, without using the redirection features. For instance, Subversion doesn't like HTTP redirects and just says that the repository has been moved and I need to relocate my working copy. That's not what I want.

    If anyone thinks this can be done, does anyone have links to topics I need to read up on? I think I can manage the actual request parts, even with authentication, but I have no idea how to create an HTTP handler. Also bear in mind that I need to handle sub-paths and documents beneath the top-level domain, so http://svn.vkarlsen.no/whatever/here needs to be handled by a single handler; I cannot create copies of the handler for all sub-directories, since paths are created from time to time.
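
    The skeleton of such a pass-through handler is small; what makes real reverse proxying hard is everything the sketch below omits (request bodies and the WebDAV verbs Subversion relies on, plus header fidelity, authentication and streaming) - which is why IIS's Application Request Routing module is usually the easier route. A minimal GET-only sketch:

        using System.IO;
        using System.Net;
        using System.Web;

        public class ProxyHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Rebuild the incoming URL against the internal port (81 = Subversion).
                string target = "http://localhost:81" + context.Request.Url.PathAndQuery;
                HttpWebRequest req = (HttpWebRequest)WebRequest.Create(target);
                req.Method = context.Request.HttpMethod;

                using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
                using (Stream body = resp.GetResponseStream())
                {
                    context.Response.ContentType = resp.ContentType;
                    body.CopyTo(context.Response.OutputStream);   // .NET 4
                }
            }
        }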

  • Problems with video conversions through the web (local host)

    - by ron-d
    Hello, I get the following errors when I attempt video format conversions called from the local host:

        - "An invalid media type was specified" for M4V to WMV conversions.
        - "One or more arguments are invalid" for MP4 to WMV conversions.

    Here are the details of the problems. I've written a DLL in C# that accepts videos in the formats AVI, WMV, M4V and MP4 and performs the following actions:

        1. Creates a copy of the input video in WMV format.
        2. Creates a WAV file of the input video's audio portion.
        3. Creates a JPG image from a frame of the input video.

    I attached the DLL to an ASP.NET web project that performs the DLL's actions. When tested through the developer studio, the actions are performed as intended for all formats. When I place the web project where it is served through the local host in a web browser, the following behavior takes place:

        - WMV format: all actions performed as intended.
        - AVI format: creates the WMV file - OK; creates the JPG image - OK; creates an empty WAV file - problem.
        - M4V format: creates an empty WAV file, does not create the WMV file, does not create the JPG file, and throws the error "An invalid media type was specified".
        - MP4 format: creates an empty WAV file, does not create the WMV file, does not create the JPG file, and throws the error "One or more arguments are invalid".

    When I check their security properties, all the files have the same permission access parameters. Can anyone guide me on how to solve these problems when the web project is called from the local host? Thank you.

  • Problem shrinking with StretchBlt()

    - by SparkyNZ
    Hi. I have some code that paints my own rectangular buttons based on a source bitmap. Most of the time the destination buttons are bigger than my source bitmap image, and StretchBlt works fine. However, when the destination is smaller than the source image, StretchBlt refuses to fill the entire destination area. I know StretchBlt isn't great on quality when it comes to scaling down images, but I'm not too concerned about that; I just don't want missing pixels.

    Here is a link with the source image at the top and the destination at the bottom: link text

    Note that I am actually shrinking parts of the source image into the destination; I am not shrinking the entire image down. So, for example, I copy the corners size-for-size with BitBlt(), then I stretch (squash) the horizontal pixel data between the corners from the source image into the destination DC. There is no fault with my source and destination coordinates: if I change from SRCCOPY to WHITENESS, the entire area fills with white as you'd expect, and there is no grey bar where pixels haven't been copied, as you see in the Broken.bmp image above.

    Has anyone had this problem before, and if so, can somebody please suggest a solution? Cheers
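
    One thing worth ruling out when StretchBlt misbehaves only while shrinking is the DC's stretch mode: the default BLACKONWHITE (STRETCH_ANDSCANS) mode ANDs eliminated rows/columns together, and HALFTONE generally gives the sanest downscaling; HALFTONE also requires resetting the brush origin afterwards. Whether it explains the unfilled band depends on the exact coordinates, but it is a two-line experiment:

        // Before the StretchBlt calls that shrink:
        SetStretchBltMode(hdcDest, HALFTONE);
        SetBrushOrgEx(hdcDest, 0, 0, NULL);   // required after switching to HALFTONE

        StretchBlt(hdcDest, dx, dy, dw, dh,
                   hdcSrc,  sx, sy, sw, sh, SRCCOPY);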

  • Force download working, but the file shows as invalid when opened locally

    - by Cody Robertson
    Hi, I wrote this function and everything works well until I try to open the downloaded copy, which shows that the file is invalid. Here is my function:

        function download_file()
        {
            // Check for download request:
            if (isset($_GET['file'])) {
                // Make sure there is a file before doing anything
                if (is_file($this->path . basename($_GET['file']))) {
                    // Below required for IE:
                    if (ini_get('zlib.output_compression')) {
                        ini_set('zlib.output_compression', 'Off');
                    }
                    // Set headers:
                    header('Pragma: public');
                    header('Expires: 0');
                    header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
                    header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $this->path . basename($_GET['file'])) . ' GMT');
                    header('Content-Type: application/force-download');
                    header('Content-Disposition: inline; filename="' . basename($_GET['file']) . '"');
                    header('Content-Transfer-Encoding: binary');
                    header('Content-Length: ' . filesize($this->path . basename($_GET['file'])));
                    header('Connection: close');
                    readfile($this->path . basename($_GET['file']));
                    exit();
                }
            }
        }
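
    Two details in this function commonly corrupt the downloaded copy. gmdate() is being handed a file path where it expects a Unix timestamp, so the Last-Modified header is garbage, and anything already sitting in PHP's output buffer (stray whitespace, a BOM, an earlier echo) gets prepended to the binary stream. A sketch of both fixes:

        // Use the file's mtime, not its path, for Last-Modified:
        $file = $this->path . basename($_GET['file']);
        header('Last-Modified: ' . gmdate('D, d M Y H:i:s', filemtime($file)) . ' GMT');

        // Make sure nothing else has leaked into the response body:
        while (ob_get_level()) {
            ob_end_clean();   // discard buffered output before the file bytes
        }
        readfile($file);
        exit();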

  • Migrate Data and Schema from MySQL to SQL Server

    - by colithium
    Are there any free solutions for automatically migrating a database from MySQL to SQL Server that "just work"? I've been attempting this simple (at least I thought so) task all day now. I've tried SQL Server Management Studio's Import Data feature:

        1. Create an empty database.
        2. Tasks -> Import Data...
        3. .NET Framework Data Provider for Odbc.
        4. Valid DSN (verified it connects).
        5. "Copy data from one or more tables or views".
        6. Check one VERY simple table.
        7. Click Preview.
        8. Get the error: "The preview data could not be retrieved. ADDITIONAL INFORMATION: ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.45-community]You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"table_name"' at line 1 (myodbc5.dll)"

    A similar error occurs if I go through the rest of the wizard and perform the operation. The failed step is "Setting Source Connection"; the error refers to retrieving column information and then lists the above error. It can retrieve column information just fine when I modify column mappings, so I really don't know what the issue is. I've also tried getting various MySQL tools to output DDL statements that SQL Server understands, but haven't succeeded. I've tried with MySQL v5.1.11 to SQL Server 2005 and with MySQL v5.1.45 to SQL Server 2008 (with ODBC drivers 3.51.27.00 and 5.01.06.00 respectively).
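
    The error text carries the clue: the wizard wraps the table name in double quotes ('"table_name"'), and MySQL only treats double quotes as identifier quoting when the ANSI_QUOTES SQL mode is on. A hedged sketch of the MySQL-side fix (it can also be set as the ODBC DSN's initial statement so the importer's session picks it up):

        -- Let MySQL accept "table_name" as an identifier, not a string literal:
        SET SESSION sql_mode = 'ANSI_QUOTES';

    Microsoft's free SQL Server Migration Assistant (SSMA) for MySQL is the other common route for this kind of migration.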

  • Getting Started: Silverlight 4 Business Application

    - by Eric J.
    With the arrival of VS 2010 and Silverlight 4, I decided it's time to look into Silverlight and understand how to build a 3-tier business application. After several hours of searching for and reading documentation and tutorials, I'm thoroughly confused (and that doesn't happen easily). Here are some specific points I don't understand. I welcome guidance on any of them, and would also appreciate references to a really good tutorial.

    Brad Abrams' "What is .NET RIA Services" (written for Silverlight 3) seemed very promising, until I realized I don't have System.Web.Ria.dll on my system. Am I missing an optional download? Was this rolled into another DLL for Silverlight 4? Did it go away in favor of something else in Silverlight 4?

    This recent blog says to start from a Silverlight Business Application, remove unwanted stuff, create a WCF RIA Services class library project, and copy files and references from the business application to the WCF RIA Services project, while manually updating resource references (perhaps a bug in the Beta 2 compiler). Is this really the right road to go down? It seems very clumsy. My requirements are to perform very simple CRUD on straightforward business objects. I'm looking forward to suggestions on how to do that the Silverlight 4 way.

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a Java project that allows users to parse multiple files with potentially thousands of lines. The information parsed will be stored in different objects, which will then be added to a collection. Since the GUI won't require loading ALL these objects at once and keeping them in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now.

    I've also thought about the case where, after loading a subset of the data into the collection and presenting it in the GUI, I need the best way to reload the previously observed data. Re-run the parser / repopulate the collection / repopulate the GUI? Or perhaps find a way to keep the collection in memory, or serialize/deserialize the collection itself?

    I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Let's say I filter on ID, so my new subset could contain data from two previously analyzed subsets. This would be no problem if I kept a master copy of the whole data in memory. I've read that google-collections is good and efficient when handling big amounts of data, and offers methods that simplify lots of things, so it might offer an alternative that allows me to keep the collection in memory.

    This is just general talking; the question of which collection to use is a separate and complex thing. Do you know what the general recommendation is for this type of task? I'd like to hear what you've done in similar scenarios. I can provide more specifics if needed.
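
    The classic mechanism for the load-on-demand half of this is a one-pass offset index: parse the file once, remember where each record starts, and re-read only the requested records later, so nothing but the index has to stay in memory. A minimal Java sketch, assuming one record per line:

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.util.ArrayList;
        import java.util.List;

        public class LineIndex {
            private final List<Long> offsets = new ArrayList<Long>();
            private final RandomAccessFile file;

            public LineIndex(String path) throws IOException {
                file = new RandomAccessFile(path, "r");
                offsets.add(0L);                      // first record starts at 0
                while (file.readLine() != null) {     // single indexing pass
                    offsets.add(file.getFilePointer());
                }
            }

            /** Re-reads one record on demand instead of caching the parsed object. */
            public String line(int i) throws IOException {
                file.seek(offsets.get(i));
                return file.readLine();
            }
        }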
