Search Results

Search found 11930 results on 478 pages for 'shared machines'.

  • Just messed up a server misusing chown, how to execute it correctly?

    - by Jack Webb-Heller
    Hi! I'm moving from an old shared host to a dedicated server at MediaTemple. The server is running Plesk CP, but, as far as I can tell, there's no way via the interface to do what I want to do. On the old shared host, running cPanel, I created a .zip archive of all the website's files. I downloaded this to my computer, then uploaded it with FTP to the new host account I'd set up. Finally, I logged in via SSH, navigated to the directory the zip was stored in (something like /var/www/vhosts/mysite.com/httpdocs/) and ran the unzip command on the file sitearchive.zip. This extracted everything just fine, and the site appeared to work.

    The problem: when I tried to edit a file through FTP, I got Error - 160: Permission Denied. When I Get Info for the file I'm trying to edit, it says the owner and group is swimwir1. I attempted to use chown at this point to change the owner - and yes, as you may be able to tell, I'm a little inexperienced in SSH ;) Luckily the server was new, since the command I ran - chown -R newuser / - appeared to mess a load of stuff up. The reason I used / on the end rather than /var/www/vhosts/mysite.com/httpdocs/ was because I'd already cd'd into there, so I presumed the / was relative to where I was working. This may be the case, I have no idea; either way, Plesk was no longer accessible, although Apache and other things continued to work. I realised my mistake, and deciding it wasn't worth the hassle of 1) being an amateur and 2) trying to fix it, I just reprovisioned the server to start afresh.

    So - what do I do to change the owner of these files correctly? Thanks for helping out a confused beginner! Jack
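
    A hedged sketch of the scoped command (path and user name taken from the question; psacln is a common Plesk group for web content and is an assumption here). Note that a path beginning with / is always absolute - it never means "relative to the current directory":

        # Re-own only the site's document root, not the whole filesystem
        chown -R newuser:psacln /var/www/vhosts/mysite.com/httpdocs/

        # Equivalent relative form, run from inside httpdocs/
        cd /var/www/vhosts/mysite.com/httpdocs/
        chown -R newuser:psacln .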

  • Visual Studio 2008 project file does not load because of an unexpected encoding change.

    - by Xenan
    In our team we have a database project in Visual Studio 2008 which is under source control by Team Foundation Server. Every two weeks or so, after one co-worker checks in, the project file won't load on the other developers' machines. The error message is:

        The project file could not be loaded. Data at the root level is invalid. Line 1, position 1.

    When I look at the project file in Notepad++, the file looks like this:

        ??<NUL?NULxNULmNULlNUL NULvNULeNULrNULsNULiNULoNULnNUL ...

    and so on (you can see <?xml version in this), whereas a normal project file looks like:

        <?xml version="1.0" encoding="utf-16"?> ...

    So probably something is wrong with the encoding of the file. This is a problem for us because it turns out to be impossible to get the file encoding correct again. The 'solution' is to throw away the project file and get the last known working version from source control. According to the file, the encoding should be UTF-16. According to Notepad++, the corrupted file is actually UTF-8. My questions are:

    1. Why is Visual Studio messing up the encoding of the project file, apparently at random times and on random machines?
    2. What should we do to prevent this?
    3. When it has happened, is there a possibility to restore the current file in the correct encoding instead of pulling an older version from source control?

    As a last note: the problem is with one single project file; all the other project files don't expose this problem.
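
    A hedged recovery sketch (not from the thread; it assumes the textual content is intact and only the BOM/encoding is off, which matches the NUL-interleaved view above): read the raw bytes, strip any stray BOM, decode, and rewrite as UTF-16LE with a proper BOM so the bytes match the file's encoding="utf-16" declaration. The file name is hypothetical.

        import codecs

        def normalize_project_file(path):
            raw = open(path, 'rb').read()
            if raw.startswith(codecs.BOM_UTF8):
                raw = raw[len(codecs.BOM_UTF8):]       # stray UTF-8 BOM
            if raw.startswith(codecs.BOM_UTF16_LE):
                text = raw[2:].decode('utf-16-le')     # already UTF-16LE
            elif b'\x00' in raw[:64]:
                text = raw.decode('utf-16-le')         # NUL-interleaved: UTF-16LE, BOM lost
            else:
                text = raw.decode('utf-8')             # what Notepad++ reported
            with open(path, 'wb') as f:
                f.write(codecs.BOM_UTF16_LE)           # proper BOM back in front
                f.write(text.encode('utf-16-le'))

        normalize_project_file('MyDatabase.dbproj')    # hypothetical file name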

  • Have error message show when form is created through AJAX

    - by Railslearner
    I have a page called /add that you can add a Dog on, and the form is in its own partial. I'm using Simple Form and Twitter Bootstrap. I added the files for the main Bootstrap but use a gem for simple_form to work with it, just so you know.

    DogsController:

        # new.js.erb (deleted new.html.erb)
        def new
          @dog = Dog.new
          respond_to do |format|
            format.js
          end
        end

        # create.js.erb
        def create
          @dog = current_user.dogs.new(params[:dog])
          respond_to do |format|
            if @dog.save
              format.html { redirect_to add_url, notice: 'Dog was successfully added.' }
              format.json { render json: @dog, status: :created, location: @dog }
              format.js
            else
              format.html { render 'pages/add' }
              format.json { render json: @dog.errors, status: :unprocessable_entity }
            end
          end
        end

    dogs/_form.html.erb:

        <%= simple_form_for(@dog, :remote => true) do |f| %>
          <%= render :partial => "shared/error_message", :locals => { :f => f } %>
          <%= f.input :name %>
          <%= f.button :submit, 'Done' %>
        <% end %>

    This line:

        <%= render :partial => "shared/error_message", :locals => { :f => f } %>

    is for Bootstrap, so it renders the errors HTML correctly.

    PagesController:

        def add
          respond_to do |format|
            format.html
          end
        end

    pages/add.html.erb:

        <div id="generate-form">
        </div>

    dogs/new.js.erb:

        $("#generate-form").html("<%= escape_javascript(render(:partial => 'dogs/form', locals: { dog: @dog })) %>");

    Now how would I get this to render the error partial as if it was still on my dogs/new.html.erb, since it's being created through AJAX? I don't need client-side validations, do I?
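
    A hedged sketch of one common approach (not from the thread): add a format.js line to the failure branch of create as well, and supply an app/views/dogs/create.js.erb that swaps the form partial back in; since @dog then carries validation errors, simple_form and the shared/error_message partial emit the error markup. The success message below is an assumption:

        <% if @dog.errors.any? %>
          // Re-render the form; @dog.errors makes the error partial show up
          $("#generate-form").html("<%= escape_javascript(render(:partial => 'dogs/form', locals: { dog: @dog })) %>");
        <% else %>
          $("#generate-form").html("<p>Dog was successfully added.</p>");
        <% end %>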

  • Difference between KeywordQuery, FullTextQuerySearch type for Object Model and Web service Query

    - by Raghu
    Initially I believed these 3 to be doing more or less the same thing, with just the notation being different - until recently, when I noticed that there does exist a big difference between the results of KeywordQuery/FullTextSqlQuery and a Web service Query. I used both the KeywordQuery and FullText methods to search for the value of a custom column XYZ with value (ASDSADA-21312ASD-ASDASD).

    FullTextSqlQuery - when I run the query as:

        FullTextSqlQuery myQuery = new FullTextSqlQuery(site);

        // Construct query text
        String queryText = "Select title, path, author, isdocument from scope() where freetext('ASDSADA-21312ASD-ASDASD')";
        myQuery.QueryText = queryText;
        myQuery.ResultTypes = ResultType.RelevantResults;

        // execute the query and load the results into a datatable
        ResultTableCollection queryResults = myQuery.Execute();
        ResultTable resultTable = queryResults[ResultType.RelevantResults];

        // Load table with results
        DataTable queryDataTable = new DataTable();
        queryDataTable.Load(resultTable, LoadOption.OverwriteChanges);

    I get the following result representing the document:

        Title: TestPDF
        path: http://SharepointServer/Shared Documents/Forms/DispForm.aspx?ID=94
        author: null
        isDocument: false

    Do note the path and isDocument fields of the above result.

    Web service method - then I tried a Web service Query method. I used the SharePoint Search Service Tool available at http://sharepointsearchserv.codeplex.com/ and ran the same query, i.e. Select title, path, author, isdocument from scope() where freetext('ASDSADA-21312ASD-ASDASD'). This time I got the following results:

        Title: TestPDF
        path: http://SharepointServer/Shared Documents/TestPDF.pdf
        author: null
        isDocument: true

    Again, note the path. While the search results from the 2nd method are useful, as they provide me the file path exactly, I can't seem to understand why method 1 is not giving me the same results. Why is there a discrepancy between the two?

  • iPhone static library Clang/LLVM error: non_lazy_symbol_pointers

    - by Bekenn
    After several hours of experimentation, I've managed to reduce the problem to the following example (C++):

        extern "C" void foo();

        struct test
        {
            ~test() { }
        };

        void doTest()
        {
            test t;  // 1
            foo();   // 2
        }

    This is being compiled for iOS devices in Xcode 4.2, using the provided Clang compiler (Apple LLVM compiler 3.0) and the iOS 5.0 SDK. The project is configured as a Cocoa Touch Static Library, and "Enable Linking With Shared Libraries" is set to No because I'm building an AIR native extension. The function foo is defined in another external library. (In my actual project, this would be any of the C API functions defined by Adobe for use in AIR native extensions.) When attempting to compile this code, I get back the error:

        FATAL:incompatible feature used: section type non_lazy_symbol_pointers (must specify "-dynamic" to be used)
        clang: error: assembler command failed with exit code 1 (use -v to see invocation)

    The error goes away if I comment out either of the lines marked 1 or 2 above, or if I change the build setting "Enable Linking With Shared Libraries" to Yes. (However, if I change the build setting, then I get multiple "ld warning: unexpected srelocation type 9" warnings when linking the library into the final project, and the application crashes when running on the device.) The build error also goes away if I remove the destructor from test. So: is this a bug in Clang? Am I missing some all-important and undocumented build setting? The interaction between an externally-provided function and a struct with a destructor is very peculiar, to say the least.

  • Java: library that does nice formatted log outputs

    - by WizardOfOdds
    I cannot find back a library that allowed formatting log output statements in a much nicer way than what is usually seen. One of the features I remember is that it could 'offset' the log message depending on the 'nestedness' of where the log statement was occurring. That is, instead of this:

        DEBUG | DefaultBeanDefinitionDocumentReader.java|  86 | Loading bean definitions
        DEBUG | AbstractAutowireCapableBeanFactory.java | 411 | Finished creating instance of bean 'MS-SQL'
        DEBUG | DefaultSingletonBeanRegistry.java       | 213 | Creating shared instance of singleton bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java         | 383 | Creating instance of bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java         | 459 | Eagerly caching bean 'MySQL' to allow for resolving potential circular references
        DEBUG | AutowireCapableBeanFactory.java         | 789 | Another debug message

    it would show something like this:

        DEBUG | DefaultBeanDefinitionDocumentReader.java|  86 | Loading bean definitions
        DEBUG | AbstractAutowireCapableBeanFactory.java | 411 | Finished creating instance of bean 'MS-SQL'
        DEBUG | DefaultSingletonBeanRegistry.java       | 213 | Creating shared instance of singleton bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java         | 383 | Creating instance of bean 'MySQL'
        DEBUG | AutowireCapableBeanFactory.java         | 459 |     |__ Eagerly caching bean 'MySQL' to allow for resolving potential circular references
        DEBUG | AutowireCapableBeanFactory.java         | 789 |     |__ Another debug message

    This is an example I just made up (VeryLongCamelCaseClassNamesNotMine). But I remember seeing such cleanly formatted log output, and it was really much nicer than anything I had seen before; in addition to being just plain nicer, it was also easier to read, for it reproduced some of the logical organization of the code. Yet I cannot find anymore what that library was. I'm pretty sure it was fully compatible with log4j or slf4j.
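
    For flavour, a hedged sketch (my own, not the remembered library) of how such offsetting can be done with plain log4j 1.x: a custom layout that indents the message according to the current NDC depth, which the application raises and lowers as it enters and leaves nested work.

        import org.apache.log4j.NDC;
        import org.apache.log4j.PatternLayout;
        import org.apache.log4j.spi.LoggingEvent;

        // Toy layout: prefixes each line with indentation proportional
        // to the NDC depth, so nested calls read as a tree.
        public class IndentedLayout extends PatternLayout {
            public IndentedLayout() {
                super("%-5p | %-40F | %4L | %m%n");  // columns as in the example
            }

            @Override
            public String format(LoggingEvent event) {
                StringBuilder pad = new StringBuilder();
                for (int i = 0; i < NDC.getDepth(); i++) pad.append("    ");
                if (NDC.getDepth() > 0) pad.append("|__ ");
                // A real implementation would indent only the %m column;
                // prepending to the whole line keeps the sketch short.
                return pad.toString() + super.format(event);
            }
        }

    The application drives the depth with NDC.push("unit of work") on entry and NDC.pop() on exit.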

  • Can't return a List from a Compiled Query.

    - by Andrew
    I was speeding up my app by using compiled queries for queries which were getting hit over and over. I tried to implement it like this:

        Function [Select](ByVal fk_id As Integer) As List(Of SomeEntity)
            Using db As New DataContext()
                db.ObjectTrackingEnabled = False
                Return CompiledSelect(db, fk_id)
            End Using
        End Function

        Shared CompiledSelect As Func(Of DataContext, Integer, List(Of SomeEntity)) = _
            CompiledQuery.Compile(Function(db As DataContext, fk_id As Integer) _
                (From u In db.SomeEntities _
                 Where u.SomeLinkedEntity.ID = fk_id _
                 Select u).ToList())

    This did not work, and I got this error message:

        Type : System.ArgumentNullException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Message : Value cannot be null.
        Parameter name: value

    However, when I changed my compiled query to return IQueryable instead of List, like so:

        Function [Select](ByVal fk_id As Integer) As List(Of SomeEntity)
            Using db As New DataContext()
                db.ObjectTrackingEnabled = False
                Return CompiledSelect(db, fk_id).ToList()
            End Using
        End Function

        Shared CompiledSelect As Func(Of DataContext, Integer, IQueryable(Of SomeEntity)) = _
            CompiledQuery.Compile(Function(db As DataContext, fk_id As Integer) _
                From u In db.SomeEntities _
                Where u.SomeLinkedEntity.ID = fk_id _
                Select u)

    it worked fine. Can anyone shed any light as to why this is? BTW, compiled queries rock! They sped up my app by a factor of 2.

  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I have trouble creating a Bitmap from a byte array. I post this after a careful scrutiny of the existing posts about Bitmap creation from byte arrays, like the following: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array.

    My code is aimed to execute a filter on a digital image (8bppIndexed), writing the pixel values to a byte[] buffer to be converted again (after some processing to manage gray levels) into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific Perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct

    Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue, I have commented out the code responsible for executing the filter, so that the operation really performed is an obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    I know, I should use lock and unlock, but I prefer one headache at a time. Anyway, this code should be a sort of identity transform, and at last I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);

            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;

            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e

    It is quite clear that I am losing a pixel per row, but I cannot understand why: I have carefully controlled all the parameters (OutputImageCols, OutputImageRows, and the byte[] byteBuffer length and content), even writing known values as a way to test. The code is nearly identical to other code posted on Stack Overflow and elsewhere. Could someone help to identify where the problem is? Thanks a lot
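
    Not the poster's fix, but a hedged sketch of the usual suspect in this exact pattern: BitmapData rows are padded to a 4-byte boundary, so for a 254-pixel-wide 8bpp image data.Stride is 256 rather than 254, and a single Marshal.Copy of a tightly packed buffer drifts by the padding on every row. Copying row by row respects the stride (names reuse the question's):

        // Assumes byteBuffer holds OutputImageRows rows of OutputImageCols
        // bytes, tightly packed; data.Stride includes GDI+'s row padding.
        for (int row = 0; row < OutputImageRows; row++)
        {
            Marshal.Copy(byteBuffer,
                         row * OutputImageCols,                     // packed source row
                         IntPtr.Add(data.Scan0, row * data.Stride), // padded destination row
                         OutputImageCols);                          // copy only the real pixels
        }

    (IntPtr.Add needs .NET 4; on older runtimes the offset can be computed via new IntPtr(data.Scan0.ToInt64() + row * data.Stride).)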

  • production vs dev server content-disposition filename encoding

    - by rgripper
    I am using ASP.NET MVC 3 and downloading the file in the same browser (Chrome 22). Here is the controller code:

        [HttpPost]
        public ActionResult Uploadfile(HttpPostedFileBase file) //HttpPostedFileBase file, string excelSumInfoId)
        {
            ...
            return File(
                result.Output,
                "application/vnd.ms-excel",
                String.Format("{0}_{1:yyyy.MM.dd-HH.mm.ss}.xls", "????????????", DateTime.Now));
        }

    On my dev machine I download a programmatically created file with the correct name "????????????_2012.10.18-13.36.06.xls". Response:

        Content-Disposition:attachment; filename*=UTF-8''%D0%A1%D1%83%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5_2012.10.18-13.36.06.xls
        Content-Length:203776
        Content-Type:application/vnd.ms-excel
        Date:Thu, 18 Oct 2012 09:36:06 GMT
        Server:ASP.NET Development Server/10.0.0.0
        X-AspNet-Version:4.0.30319
        X-AspNetMvc-Version:3.0

    And from the production server I download a file named after the controller's action plus the correct extension, "Uploadfile.xls", which is wrong. Response:

        Content-Disposition:attachment; filename="=?utf-8?B?0KHRg9C80LzQuNGA0L7QstCw0L3QuNC1XzIwMTIuMTAuMTgtMTMuMzYu?=%0d%0a =?utf-8?B?NTUueGxz?="
        Content-Length:203776
        Content-Type:application/vnd.ms-excel
        Date:Thu, 18 Oct 2012 09:36:55 GMT
        Server:Microsoft-IIS/7.5
        X-AspNet-Version:4.0.30319
        X-AspNetMvc-Version:3.0
        X-Powered-By:ASP.NET

    Web.config files are the same on both machines. Why does the filename get encoded differently for the same browser? Are there any kinds of default settings, different between the machines, that I am missing?

  • GCC Dynamic library building problem

    - by Sirish Kumar
    I am new to Linux; while compiling with a dynamic library I am getting a segmentation fault error. I have two files.

    ctest1.c:

        void ctest1(int *i)
        {
            *i = 10;
        }

    ctest2.c:

        void ctest2(int *i)
        {
            *i = 20;
        }

    I have compiled both files to a shared library named libtest.so using the following command:

        gcc -shared -W1,-soname,libtest.so.1 -o libtest.so.1.0.1 ctest1.o ctest2.o -lc

    And I have written another program, prog.c, which uses the functions exported by this library.

    prog.c:

        #include <stdio.h>

        void (ctest1)(int);
        void (ctest2)(int*);

        int main()
        {
            int a;
            ctest1(&a);
            printf("%d", a);
            return 0;
        }

    And I built the executable with the following command:

        gcc -Wall prog.c -L. -o prog

    But when I run the generated executable I get the SegmentationFault error. When I checked the header of prog with ldd, it shows:

        linux-vdso.so.1 => (0x00007f99dff000)
        libc.so.6 => /lib64/libc.so.6 (0x0007feeaa8c1000)
        /lib64/ld-linux-x86-64.so.2 (0x00007feeaac1c000)

    Can somebody tell what the problem is?
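
    A hedged sketch of the conventional build recipe (my reading: the question's -W1 with a digit one is likely a typo for -Wl with a lowercase L, and the final link never references the library at all, which the missing libtest line in the ldd output also suggests):

        # Position-independent objects, then the shared library
        gcc -fPIC -c ctest1.c ctest2.c
        gcc -shared -Wl,-soname,libtest.so.1 -o libtest.so.1.0.1 ctest1.o ctest2.o

        # Linker-name symlink so -ltest resolves
        ln -s libtest.so.1.0.1 libtest.so

        # Link the program against the library and run it
        gcc -Wall prog.c -L. -ltest -o prog
        LD_LIBRARY_PATH=. ./prog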

  • DVCS with a Windows central repository

    - by Mikko Rantanen
    We are currently using VSS for version control. Quite a few of our developers are interested in a distributed model (and want to get rid of VSS). Our network is full of Windows machines, and while our IT department has experience maintaining Linux machines, they would prefer not to. What DVCS systems can host their central repository on Windows while providing:

        - Push access to the repository.
        - Basic authentication; mostly just a way to allow or deny access to the whole repository, with no need for fine-grained access control.
        - A server process, so users don't need write rights to the repository, reducing the risk of accidentally messing with it.

    On the client side, a GUI such as Tortoise would be more or less a requirement (sorry, the Windows shell sucks. :|). Ease of installation would be a huge plus, as our IT department is already quite low on resources. And using Windows credentials for authentication would be an advantage, but not a requirement as long as the client is able to store the credentials.

    I have had a (really) quick look at Git, Mercurial and Bazaar:

        - Git seemed to use ssh or simple WebDAV for repository access, requiring write permission for the users.
        - Mercurial had a built-in http server, but this seemed to be only for pull purposes. Update: Mercurial supports push as well.
        - Bazaar seemed to use sftp for repository access, again requiring write permission for the users.

    Are there Windows server processes for any DVCS systems, and has anyone managed to set one up in a Windows land? Apologies if this is a duplicate question; I couldn't find one.

    Update: Got Mercurial working for push purposes! A detailed list of what was required can be found as an answer below.

  • git clone fails with "index-pack" failed?

    - by gct
    So I created a remote repo that's not bare (because I need Redmine to be able to read it), and it's set to be shared with the group (so git init --shared=group). I was able to push to the remote repo and now I'm trying to clone it. If I clone it over the net I get this:

        remote: Counting objects: 4648, done.
        remote: Compressing objects: 100% (2837/2837), done.
        error: git-upload-pack: git-pack-objects died with error.
        fatal: git-upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    I'm able to clone it locally without a problem, and I ran git fsck, which only reports some dangling trees/blobs, which I understand aren't a problem. What could be causing this? I'm still able to pull from it, just not clone. I should note the remote git version is 1.5.6.5 while the local one is 1.6.0.4.

    I tried cloning my local copy of the repo, stripping out the .git folder and pushing to a new repo, then cloning the new repo, and I get the same error, which leads me to believe it may be a file in the repo that's causing git-upload-pack to fail...

    Edit: I have a number of Windows binaries in the repo, because I just built the Python modules and then stuck them in there so everyone else didn't have to build them as well. If I remove the Windows binaries and push to a new repo, I can clone again; perhaps that gives a clue. Trying to narrow down exactly what file is causing the problem now.
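
    A hedged diagnostic sketch (not from the thread): since the clone dies inside git-pack-objects on the remote, repacking there ahead of time, with pack limits tightened for the large binaries, is a common first step. The config keys exist in git of that era; the values are guesses:

        # On the remote machine, inside the repository
        git config pack.windowMemory 64m    # cap per-window delta memory
        git config pack.packSizeLimit 256m  # split oversized packs
        git repack -a -d                    # rebuild packs up front
        git fsck --full                     # rule out real corruption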

  • SQL2008 merge replication fails to update dependent items when table is added

    - by Dan Puzey
    Setup: an existing SQL 2008 merge replication scenario - a large server database, including views and stored procs, being replicated to client machines.

    What I'm doing:

        * adding a new table to the database
        * marking the new table for replication (using sp_addmergearticle)
        * altering a view (which is already part of the replicated content) to include fields from the new table (which is joined to the tables in the existing view); a stored procedure is similarly updated

    The problem: the table gets replicated to client machines, but the view is not updated. The stored procedure is not updated either.

    Non-useful workaround: if I run the snapshot agent after calling sp_addmergearticle and before updating the view/SP, both the view and the stored procedure changes correctly replicate to the client.

    The bigger problem: I'm running a list of database scripts in a transaction, as part of a larger process. The snapshot agent can't be run during a transaction, and if I interrupt the transaction (e.g. by running the scripts in multiple transactions), I lose the ability to roll back the changes should something fail.

    Does anyone have any suggestions? It seems like I must be missing something obvious, because I don't see why the changes to the view/sproc wouldn't be replicating anyway, regardless of what's going on with the new table.
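
    For reference, a hedged sketch of the article-add call with the snapshot flags spelled out (object and publication names are placeholders; whether these flags remove the need to run the snapshot agent mid-script is exactly the open question here):

        EXEC sp_addmergearticle
            @publication = N'MyPublication',   -- placeholder name
            @article = N'NewTable',
            @source_object = N'NewTable',
            @source_owner = N'dbo',
            @force_invalidate_snapshot = 1,    -- admit the snapshot is now stale
            @force_reinit_subscription = 0;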

  • is it possible to turn off vdso on glibc side?

    - by heroxbd
    I am aware that passing vdso=0 to the kernel can turn this feature off, and that the dynamic linker in glibc can automatically detect and use the vDSO feature from the kernel. Here I met with this problem: there is a RHEL 5.6 box (kernel 2.6.18-238.el5) in my institution where I only have normal user access, probably suffering from RHEL bug 673616. As I compile a toolchain of linux-headers-3.9/gcc-4.7.2/glibc-2.17/binutils-2.23 on top of it, the gcc bootstrap fails because cc1 from stage2 cannot be run:

        Program received signal SIGSEGV, Segmentation fault.
        0x00002aaaaaaca6eb in ?? ()
        (gdb) info sharedlibrary
        From                To                  Syms Read   Shared Object Library
        0x00002aaaaaaabba0  0x00002aaaaaac3249  Yes (*)     /home/benda/gnto/lib64/ld-linux-x86-64.so.2
        0x00002aaaaacd29b0  0x00002aaaaace2480  Yes (*)     /home/benda/gnto/usr/lib/libmpc.so.3
        0x00002aaaaaef2cd0  0x00002aaaaaf36c08  Yes (*)     /home/benda/gnto/usr/lib/libmpfr.so.4
        0x00002aaaab14f280  0x00002aaaab19b658  Yes (*)     /home/benda/gnto/usr/lib/libgmp.so.10
        0x00002aaaab3b3060  0x00002aaaab3b3b50  Yes (*)     /home/benda/gnto/lib/libdl.so.2
        0x00002aaaab5b87b0  0x00002aaaab5c4bb0  Yes (*)     /home/benda/gnto/usr/lib/libz.so.1
        0x00002aaaab7d0e70  0x00002aaaab80f62c  Yes (*)     /home/benda/gnto/lib/libm.so.6
        0x00002aaaaba70d40  0x00002aaaabb81aec  Yes (*)     /home/benda/gnto/lib/libc.so.6
        (*): Shared library is missing debugging information.

    and a simple program:

        #include <sys/time.h>
        #include <stdio.h>

        int main()
        {
            struct timeval tim;
            gettimeofday(&tim, NULL);
            return 0;
        }

    gets a segmentation fault in the same way if compiled against glibc-2.17 and xgcc from stage1. Both cc1 and the test program can be run on another running RHEL 5.5 box (kernel 2.6.18-194.26.1.el5) with gcc-4.7.2/glibc-2.17/binutils-2.23 as a normal user. I cannot simply upgrade the box to a newer RHEL version, nor can I turn vDSO off via sysctl or proc. The question is: is there a way to compile glibc so that it turns off vDSO unconditionally?

  • How do I make software that preserves database integrity and correctness?

    - by user287745
    I have made an application project in Visual Studio 2008 C#, with a SQL Server database created from Visual Studio 2008. The database has some 20 tables and many fields in each. I have made an interface for adding, deleting, editing and retrieving data according to the predefined needs of the users. Now I have to make the project into software which I can deliver to my professor; that is, he can just double-click the icon and the software simply starts - no Visual Studio 2008 needed to start the debugging. The database will be on one powerful computer (dual core, latest everything, Windows XP) and the users will access it from other computers connected using a LAN. I am able to change the connection string to the shared database using Visual Studio 2008 and the debugger whenever the server changes, but how am I supposed to do that when it's delivered software? There will be many clients. Am I supposed to give the same software to everyone, so they all can connect to the database? How will the integrity and correctness of the database be maintained? I mean, the db.mdf file will be in a folder which will be shared with read and write access, so it's not guaranteed that only one user will write at a time. So is there any coding for this, or?
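
    One conventional piece of this, sketched under assumptions (server, instance and database names are placeholders): point every client at the SQL Server service on the shared machine via a connection string in app.config, rather than at the db.mdf file in a shared folder, so the server mediates concurrent writes.

        <!-- app.config beside the delivered .exe; editable without recompiling -->
        <configuration>
          <connectionStrings>
            <add name="MainDb"
                 connectionString="Data Source=POWERFULPC\SQLEXPRESS;Initial Catalog=MyAppDb;Integrated Security=True"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>
        </configuration>

    The client then reads ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString, so changing servers is a text edit on each machine instead of a rebuild.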

  • Return an empty collection when Linq where returns nothing

    - by ahsteele
    I am using the statement below with the intent of getting all of the machine objects from the MachineList collection (type IEnumerable) that have a MachineStatus of "i". The MachineList collection will not always contain machines with a status of "i"; at times when no machines have a MachineStatus of "i", I'd like to return an empty collection. My call to ActiveMachines (which is used first) works, but InactiveMachines does not.

        public IEnumerable<Machine> ActiveMachines
        {
            get
            {
                return Customer.MachineList
                    .Where(m => m.MachineStatus == "a");
            }
        }

        public IEnumerable<Machine> InactiveMachines
        {
            get
            {
                return Customer.MachineList
                    .Where(m => m.MachineStatus == "i");
            }
        }

    Edit: Upon further examination it appears that any enumeration of MachineList will cause subsequent enumerations of MachineList to throw an exception: "Object reference not set to an instance of an object." Therefore, it doesn't matter whether a call is made to ActiveMachines or InactiveMachines, as it's an issue with the MachineList collection. This is especially troubling because I can break calls to MachineList simply by enumerating it in a Watch before it is called in code. At its lowest level MachineList implements NHibernate.IQuery, being returned as an IEnumerable. What's causing MachineList to lose its contents after an initial enumeration?
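
    A hedged sketch of the usual defence when a source can only be enumerated once (names reuse the question's; whether NHibernate's IQuery is the real culprit is the open question): materialize the query a single time and hand out the cached list.

        private List<Machine> _machines;

        // Enumerate the NHibernate-backed source once; List<T> can be
        // re-enumerated safely by both properties below.
        private List<Machine> Machines
        {
            get { return _machines ?? (_machines = Customer.MachineList.ToList()); }
        }

        public IEnumerable<Machine> ActiveMachines
        {
            get { return Machines.Where(m => m.MachineStatus == "a"); }
        }

        public IEnumerable<Machine> InactiveMachines
        {
            get { return Machines.Where(m => m.MachineStatus == "i"); }
        }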

  • what kind of relationship is there between a common wall and the rooms located next to it?

    - by siamak
    I want to know what the relationship is between a common wall (one located between adjoining rooms) and the rooms themselves. As I know, the relationship between a room and its walls is composition, not aggregation (am I right?), and according to the definition of composition the contained object can't be shared between two containers, whereas in aggregation it is possible. Now I am confused about the best modeling approach to represent the relationship between a common wall and the rooms located next to it. It would be highly appreciated if you could provide your advice with some code.

        |--------|--------|

    Approach 1: wall class ---- room class / Composition

    Approach 2: wall class ---- room class / Aggregation

    Approach 3: we have a wall class and a CommonWall class, where CommonWall inherits from wall:

        adjoining room class ---- (1) CommonWall class / Aggregation
        adjoining room class ---- (6) wall class / Composition

    Approach 4: I am a developer, not a designer :) so this is my idea:

        class Room
        {
            private Wall _firstwall;
            private Wall _secondwall;
            private Wall _thirdwall;
            private Wall _commonwall;

            public Room(CommonWall commonwall)
            {
                _firstwall = new Wall();
                _secondwall = new Wall();
                _thirdwall = new Wall();
                _commonwall = commonwall;
            }
        }

        class CommonWall : Wall
        {
            //...
        }

        // in somewhere:
        static void main()
        {
            CommonWall commonWall = new CommonWall();
            Room room1 = new Room(commonWall);
            Room room2 = new Room(commonWall);
            Room[] adjacentRooms = new Room[2] { room1, room2 };
        }

    Edit 1: I think this is a clear question, but just for more clarification: the point of the question is to find out the best pattern or approach to model a relationship for an object that is a component of two other objects at the same time. And about what I mean by a "room": surely I mean an enclosed square room with 4 walls and one door, but in this case one of those walls is a common wall, shared between the two adjacent rooms.

  • how useful is Turing completeness? are neural nets Turing complete?

    - by Albert
    While reading some papers about the Turing completeness of recurrent neural nets (for example: Turing computability with neural nets, Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof given there was not really that practical. For example, the referenced paper needs a neural network whose neuron activity must be of infinite exactness (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really practical.

    But I started to wonder now whether it makes sense at all to ask for Turing completeness. By the strict definition, no computer system nowadays is Turing complete, because none of them will be able to simulate the infinite tape. Interestingly, programming language specifications most often leave it open whether they are Turing complete or not. It all boils down to the question of whether they will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course all available implementations are limited here, so all practical implementations of programming languages are not Turing complete. So, what you can say is that all computer systems are just equally powerful as finite state machines, and no more. And that brings me to the question: how useful is the term Turing complete at all?

    And back to neural nets: for any practical implementation of a neural net (including our own brain), it will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness it is not Turing complete. So does the question whether neural nets are Turing complete make sense at all? The question whether they are as powerful as finite state machines was answered much earlier (1954 by Minsky, the answer of course: yes) and also seems easier to answer. I.e., at least in theory, that was already the proof that they are as powerful as any computer.

  • Linq: Why won't Group By work when Querying DataSets?

    - by jrcs3
    While playing with LINQ Group By statements using both a DataSet and a LINQ-to-SQL DataContext, I get different results with the following VB.NET 10 code:

        #If IS_DS = True Then
            Dim myData = VbDataUtil.getOrdersDS
        #Else
            Dim myData = VbDataUtil.GetNwDataContext
        #End If

        Dim MyList = From o In myData.Orders
                     Join od In myData.Order_Details On o.OrderID Equals od.OrderID
                     Join e In myData.Employees On o.EmployeeID Equals e.EmployeeID
                     Group By FullOrder = New With {
                         .OrderId = od.OrderID,
                         .EmployeeName = (e.FirstName & " " & e.LastName),
                         .ShipCountry = o.ShipCountry,
                         .OrderDate = o.OrderDate} _
                     Into Amount = Sum(od.Quantity * od.UnitPrice)
                     Where FullOrder.ShipCountry = "Venezuela"
                     Order By FullOrder.OrderId
                     Select FullOrder.OrderId, FullOrder.OrderDate, FullOrder.EmployeeName, Amount

        For Each x In MyList
            Console.WriteLine(
                String.Format(
                    "{0}; {1:d}; {2}: {3:c}",
                    x.OrderId, x.OrderDate, x.EmployeeName, x.Amount))
        Next

    With LINQ to SQL, the grouping works properly; however, the DataSet code doesn't group properly. Here are the functions that I call to create the DataSet and the LINQ-to-SQL DataContext:

        Public Shared Function getOrdersDS() As NorthwindDS
            Dim ds As New NorthwindDS

            Dim ota As New OrdersTableAdapter
            ota.Fill(ds.Orders)

            Dim otda As New Order_DetailsTableAdapter
            otda.Fill(ds.Order_Details)

            Dim eda As New EmployeesTableAdapter
            eda.Fill(ds.Employees)

            Return ds
        End Function

        Public Shared Function GetNwDataContext() As NorthwindL2SDataContext
            Dim s As New My.MySettings
            Return New NorthwindL2SDataContext(s.NorthwindConnectionString)
        End Function

    What am I missing? If it should work, how do I make it work? If it can't work, why not (what interface isn't implemented, etc.)?
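
    A hedged observation with a sketch (my reading, not an answer from the thread): in LINQ to Objects, a VB anonymous type takes part in Equals/GetHashCode only through properties declared with the Key modifier; without Key, every row becomes its own group, while LINQ to SQL never calls Equals because the grouping is translated to SQL on the server. A sketch of the grouping key with Key properties:

        Group By FullOrder = New With {
            Key .OrderId = od.OrderID,
            Key .EmployeeName = (e.FirstName & " " & e.LastName),
            Key .ShipCountry = o.ShipCountry,
            Key .OrderDate = o.OrderDate} _
        Into Amount = Sum(od.Quantity * od.UnitPrice)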

  • Debugging fortran code in Eclipse with Photran and GDB debugger: missing symbols

    - by tvandenbrande
    I have a program, written in Fortran 90, previously successfully compiled with a Compaq compiler and working, that I'm now trying to compile with gfortran. I can compile the code to an .exe and run it. It works fine until a certain point in the routine, and then an error is thrown.

    My current configuration:

        - Windows 7
        - Eclipse Juno with CDT
        - Photran
        - Cygwin installation with the gfortran compiler and GDB debugger (gdb.exe)

    Configuration for the debugger:

        - GDB command set: Standard (Windows)
        - Protocol: mi
        - Shared libraries: don't load shared library symbols automatically (when activating this, no changes are noted)

    When running the debug command I get the following output:

        .gdbinit: No such file or directory.
        Reading symbols from /cygdrive/c/Users/thys/Documents/doctoraat/12_in progress/Hamfem/Debug/Hamfem.exe...done.
        auto-solib-add on
        Undefined command: "auto-solib-add".  Try "help".
        Warning: C:/Users/thys/Documents/doctoraat/12_in progress/Hamfem/Hamfem/in: No such file or directory.
        [New Thread 5816.0x1914]
        [New Thread 5816.0x654]

    Basically that leaves me with two questions:

        1. Where can I find the .gdbinit file in the Cygwin installation?
        2. Are there any other possible errors in my setup, or points to think about?
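
    On the first question, a hedged note with a sketch: GDB reads .gdbinit from the home directory ($HOME under Cygwin, typically C:\cygwin\home\<user>), and the file simply doesn't exist until you create it. The log above also shows "auto-solib-add on" being issued without its "set" prefix, which is what produces the Undefined command line; a minimal .gdbinit supplying the full command would be:

        # in a Cygwin shell
        echo "set auto-solib-add on" > ~/.gdbinit

    GDB then sources it at startup instead of printing ".gdbinit: No such file or directory."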

  • How to deploy ClickOnce .Net 3.5 application on 3.0 machine

    - by Buthrakaur
    I have a .NET 3.5 SP1 WPF application which I'm successfully deploying to client computers using ClickOnce. Now I got a new requirement: one of our clients needs to run the application on machines equipped with just .NET 3.0, and it's entirely impossible to upgrade or install anything on those machines. I already tried to run the 3.5 application with some of the 3.5 framework DLLs copied to the application directory, and it worked without any problems. The only problem at the moment is ClickOnce. I already made it include the 3.5 framework System.*.dll files in the list of application files, but it always aborts installation on a 3.0 machine with this error message:

        Unable to install or run the application. The application requires that assembly System.Core Version 3.5.0.0 be installed in the Global Assembly Cache (GAC) first. Please contact your system administrator.

    I already tried to tweak the prerequisites on the Publish tab of my project, but no combination solved the issue. What part of ClickOnce is responsible for checking prerequisites? I already tried to deploy using mageui.exe, but the 3.5 framework error is still present. What should I do to force ClickOnce to stop checking any prerequisites at all? The project is created using VS2010.

  • gethostbyname fails for local hostname after resuming from hibernate (Vista+7?)

    - by John
    Just wondering if anyone else has spotted this: on some users' machines running our software, occasionally the call to the Win32 Winsock gethostbyname fails with error code 11004. For the argument to gethostbyname, I'm passing in the result from gethostname. Now the docs say 11004 is WSANO_DATA. None of the descriptions seem to be relevant (it occurs if you pass in an IPv6 address, but as I say, I'm passing in a hostname). Even more interesting is that MSDN suggests that this combination (gethostname followed by gethostbyname) should never fail, not even if there is no IP address (in that case it would just return an empty list of IPs). Here is the quote from the gethostname MSDN entry:

        ...it is guaranteed that the name returned will be successfully parsed by gethostbyname and WSAAsyncGetHostByName.

    It only ever happens after resuming from hibernate, in that short period when the network is restarting, and only on Vista/7 (well, I've only seen it on Vista and 7). One theory I had was that it related to IPv6: maybe for a short period the network reports an IPv6 address but not the corresponding IPv4 address (I'm pretty sure that all the client machines are dual IP stack, but I could be wrong). I tried to reproduce by turning off my network card (to force no IP addresses) and couldn't reproduce. Anyone seen this before? Any ideas? John
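
    Not a diagnosis, just a hedged sketch of the workaround pattern for a transient post-resume window: retry the lookup briefly while WSAGetLastError() keeps reporting WSANO_DATA (assumes WSAStartup has already succeeded; retry count and delay are arbitrary).

        #include <winsock2.h>
        #include <windows.h>

        /* Resolve the local host name, riding out the short window after
         * resume while the stack re-registers its addresses. */
        static struct hostent *resolve_local_host(void)
        {
            char name[256];
            int attempt;

            if (gethostname(name, sizeof(name)) != 0)
                return NULL;

            for (attempt = 0; attempt < 5; attempt++) {
                struct hostent *he = gethostbyname(name);
                if (he != NULL)
                    return he;
                if (WSAGetLastError() != WSANO_DATA)  /* 11004: retry only this */
                    break;
                Sleep(1000);  /* arbitrary backoff */
            }
            return NULL;
        }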

  • Opaque tenant identification with SQL Server & NHibernate

    - by Anton Gogolev
    Howdy! We're developing a nowadays-fashionable multi-tenanted SaaS app (shared database, shared schema), and there's one thing I don't like about it:

        public class Domain : BusinessObject
        {
            public virtual long TenantID { get; set; }
            public virtual string Name { get; set; }
        }

    The TenantID is driving me nuts, as it has to be accounted for almost everywhere, and it's a hassle from a security standpoint: what happens if a malicious API user changes TenantID to some other value and mixes things up?

    What I want to do is get rid of this TenantID in our domain objects altogether, and have either NHibernate or SQL Server deal with it. From what I've already read on the Internets, this can be done with CONTEXT_INFO (here's an NHibernate-based implementation), NHibernate filters, SQL views, and combinations thereof. Now, my requirements are as follows:

        - Remove any mentions of TenantID from domain objects
        - ...but have SQL Server insert it where appropriate (I guess this is achieved with default constraints)
        - ...and obviously provide support for filtering based on this criterion, so that customers will never see each other's data
        - If possible, avoid SQL Server views
        - Have a solution which plays nicely with NHibernate, SQL Server's MARS, and the generally highly concurrent nature of SaaS apps

    What are your thoughts on that?
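
    For the read-side filtering leg, a hedged sketch of the NHibernate-filters route (filter, table and column names are mine; this leaves the insert-side default untouched):

        <!-- in the mapping file -->
        <filter-def name="tenantFilter">
          <filter-param name="tenantId" type="long"/>
        </filter-def>

        <class name="Domain" table="Domain">
          <!-- ... id and properties ... -->
          <filter name="tenantFilter" condition="TenantID = :tenantId"/>
        </class>

    Each session then opts in with session.EnableFilter("tenantFilter").SetParameter("tenantId", currentTenantId), so queries and entities never mention TenantID explicitly.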

  • Distributing a bundle of files across an extranet

    - by John Zwinck
    I want to be able to distribute bundles of files, about 500 MB per bundle, to all machines on a corporate "extranet" (which is basically a few LANs connected using various private mechanisms, including leased lines and VPN). The total number of hosts is roughly 100, and the goal is to get a copy of the bundle from one host onto all the other hosts reliably, quickly, and efficiently. One important issue is that some hosts are grouped together on single fast LANs in which case the network I/O should be done once from one group to the next and then within each group between all the peers. This is as opposed to a strict central server system where multiple hosts might each fetch the same bundle over a slow link, rather than once via the slow link and then between each other quickly. A new bundle will be produced every few days, and occasionally old bundles will be deleted (but that problem can be solved separately). The machines in question happen to run recent Linuxes, but bonus points will go to solutions which are at least somewhat cross-platform (in which case the bundle might differ per platform but maybe the same mechanism can be used). That's pretty much it. I'm not opposed to writing some code to handle this, but it would be preferable if it were one of bash, Python, Ruby, Lua, C, or C++.
