Search Results

Search found 48797 results on 1952 pages for 'read write'.

  • What does your Lisp workflow look like?

    - by Duncan Bayne
    I'm learning Lisp at the moment, coming from a language progression that is Locomotive BASIC - Z80 Assembler - Pascal - C - Perl - C# - Ruby. My approach is to simultaneously (1) write a simple web scraper using SBCL, Quicklisp, closure-html, and Drakma, and (2) watch the SICP lectures. I think this is working well; I'm developing good 'Lisp goggles', in that I can now read Lisp reasonably easily. I'm also getting a feel for how the Lisp ecosystem works, e.g. Quicklisp for dependencies. What I'm really missing, though, is a sense of how a seasoned Lisper actually works. When I'm coding for .NET, I have Visual Studio set up with ReSharper and VisualSVN. I write tests, I implement, I refactor, I commit. When I've done enough of that to complete a story, I write some AUATs. Then I kick off a Release build on TeamCity to push the new functionality out to the customer for testing and, hopefully, approval. If it's an app that needs an installer, I use either WiX or InnoSetup, building the installer through the CI system, obviously. So, my question is: as an experienced Lisper, what does your workflow look like? Do you work mostly in the REPL, or in the editor? How do you do unit tests? Continuous integration? Packaging and deployment? When you sit down at your desk, steaming mug of coffee to one side and a framed photo of John McCarthy to the other, what is it that you do? Currently, I feel like I am getting to grips with Lisp coding, but not Lisp development ...

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it, and certainly a lot of misinformation out there, for no good reason. Let me try to give a bit of history around Oracle ASMLib.

    Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool, important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc. We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, from 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.

    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device, with datafile '/dev/sdf'), and the next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't just change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):
    - Linux async IO wasn't pretty
    - persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
    - system resource usage in terms of open file descriptors

    So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier: how can we make life easier for the admins on Linux. A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library using a specific Oracle-defined interface that gets used by the ASM instance and by the database instance when available. This interface offered 2 components:
    - an IO interface - allow any IO to the devices to go through ASMLib
    - device discovery - implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance

    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager). ODM was specified many years before we introduced ASM and allowed third-party vendors to implement their own IO routines, so that the database would use this library if installed and make use of the library's open/read/write/close, etc. routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, and Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access, you have libasm for asm/database access.

    Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are 2 components:
    - Oracle ASMLib - the userspace library with config tools (a shared object and some scripts)
    - oracleasm.ko - a kernel module that implements the asm device under /dev/oracleasm/*

    The userspace library is a binary-only module since it links with and contains Oracle header files, but it is generic; we only have one asm library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the asm device (/dev/asm). It can install on Oracle Linux, on SuSE SLES, on Red Hat RHEL, ... The library itself doesn't actually care much about the OS version; the kernel module and device do. The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can say: create an ASM disk label foo on, currently, /dev/sdf... So if /dev/sdf disappears and next time it is /dev/sdg, we just scan for the label foo, we discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything; it will be taken care of. So it's a convenience thing.

    The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/*, and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2, so anyone could easily build it for their Linux distribution kernels. Advantages of using ASMLib:
    - a good async IO interface for the database; the entire IO interface is based on an optimal async model for performance
    - a single file descriptor per Oracle process, not one per device or datafile per process, reducing the overhead of open filehandles
    - device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions or the like, which can be very complex and error prone

    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact, in the early days, even for this thing called United Linux) and RHEL. The driver didn't make sense to get pushed into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA, and release management to build kernel modules for every architecture, every Linux distribution, and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a 60k source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers, and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL, and of course as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us because we just added the code to our build system, and as we churned out Oracle Linux kernels, whether for a public release or for customers that needed a one-off fix and also used ASMLib, we didn't have to do any extra work; it was all nicely integrated.

    With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel. Note to those that wonder: yes, it's all in mainline Linux and under the GPL. This basically gave us all the features in the Linux kernel to checksum a data block and send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media. So what was missing was the ability for a userspace application (read: Oracle RDBMS) to write a block which then has a checksum and validation all the way down to the disk: application to disk. Because we have ASMLib, we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it so we can make use of it. Thanks to UEK, and us having the ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the asm kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and provide it to our customers.

    So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but as of RHEL6 we decided that it was too much effort for us to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity features, and we didn't want to end up with multiple versions of asmlib as maintained by us. SuSE SLES still builds and ships the oracleasm module and they do all the work, and Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.

    And finally, to re-iterate a few important things:
    - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Customers often confuse ASMLib with ASM; again, ASM exists on every Oracle-supported OS and on every supported Linux OS (SLES, RHEL, OL) without ASMLib.
    - The Oracle ASMLib userspace library is available from OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE for SLES for every one of their builds.
    - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel; only for UEK.
    - ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage.
    - ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel.

    Hope this helps.

    Read the article

  • SELinux adding new allowed samba type to access httpd_sys_content_t?

    - by Josh
    allow samba_share_t httpd_sys_content_t { read execute getattr setattr write };
    allow smbd_t httpd_sys_content_t { read execute getattr setattr write };

    I am taking a stab in the dark, based on resources I've looked at in various places, that the above policies are what I want. I basically want to allow Samba to write to my web docs without giving it free access to the operating system. I read a post by an NSA rep saying the best way was defining a new type and allowing both Samba and httpd access to it. Setting the content to public content (public_content_rw_t) does not work without enabling some rather permissive booleans. In short: how do I allow Samba to access a new type?

    Read the article

  • LASTDATE dates arguments and upcoming events #dax #tabular #powerpivot

    - by Marco Russo (SQLBI)
    Recently I had to write a DAX formula containing a LASTDATE within the logical condition of a FILTER. I found that its behavior was not the one I expected, and I investigated further. At the end, I wrote up my findings in this article on SQLBI, which can be applied to any Time Intelligence function with a <dates> argument. The key point is that when you write LASTDATE( table[column] ), in reality you obtain something like LASTDATE( CALCULATETABLE( VALUES( table[column] ) ) ), which converts an existing row context into a filter context. Thus, if you have something like FILTER( table, table[column] = LASTDATE( table[column] ) ), the FILTER will return all the rows of the table, whereas you probably want to use FILTER( table, table[column] = LASTDATE( VALUES( table[column] ) ) ), so that the existing filter context before executing FILTER is used to get the result from VALUES( table[column] ), avoiding the automatic expansion into a CALCULATETABLE that would hide the existing filter context. If after reading the article you want more insights, read Jeffrey Wang's post here. These days I'm speaking at SQLRally Nordic 2012 in Copenhagen, and I will be in Cologne (Germany) next week for an SSAS Tabular Workshop, whereas Alberto will teach the same workshop in Amsterdam one week later. Both workshops still have seats available, and the Amsterdam one is still in early-bird discount until October 3rd! Then, in November, I expect to meet many blog readers at PASS Summit 2012 in Seattle, and I hope to find the time to write other articles on interesting things about Tabular and PowerPivot. Stay tuned!

    Read the article

  • Where is it permissible to add logging code in a MVC model?

    - by BDotA
    Working on a C# WinForms program that is written in an MVC (actually Model-View-Presenter) style, and I want to add a few lines of code that are responsible for logging some events. Where should I write the two or three lines of code that I need? Should I write them in the Presenter? To give an idea, here are some lines of sample code that already exist in the Save() method of the Company.MyApplication.Presenter.MyPresenter.cs class:

        private void Save(Helper.SaveStatusEnum status)
        {
            if (notification.CheckLocks(orderIdCollection))
            {
                using (new HourglassController())
                {
                    controller.FireActiveCellLeaving();
                    ViewDocumentedValues();
                    int result = saveController.Save(status);
                    if (result == Helper.SAVE_SUCCESSFUL)
                    {
                        // IS IT OK TO WRITE MY COUPLE LINES OF CODE IN HERE?
                        model.Dirty = false;
                        if ((model.CurrentStatus == Helper.OrderStatusEnum.Complete) ||
                            (model.CurrentStatus == Helper.OrderStatusEnum.Corrected))
                        {
                            controller.EnableDisableSheet(false);
                        }
                        CheckApplicationState();
                        SheetHelper.ClearUnsavedDataRowImage(view.ActiveSheet);
                    }
                    else
                    {
                        MessageBox.Show("An unexpected error occurred trying to save.");
                    }
                }
            }
        }

    Read the article

  • I'm a CS student, and honestly, I don't understand Knuth's books

    - by Raymond Ho
    I stumbled upon this quote from Bill Gates: "You should definitely send me a resume if you can read the whole thing." He was talking about The Art of Computer Programming books. So I was pretty curious and want to read it all. But honestly, I don't understand it. I'm really not that intellectual, which should be the reason why I can't understand it, but I am eager to learn. I'm currently reading Volume 1, about fundamental algorithms. Are there any books out there that are friendly to novices/slow people like me, which would help me build up my knowledge so that I can read Knuth's books with ease in the future?

    Read the article

  • What permissions / ownership to set on PHP Sessions Folder when running FastCGI / PHP-FPM (as user "nobody")?

    - by Professor Frink
    I'm having trouble getting a number of scripts running because PHP-FPM can't write to my session folder:

        2009/10/01 23:54:07 [error] 17830#0: *24 FastCGI sent in stderr: "PHP Warning: Unknown: open(/var/lib/php/session/sess_cskfq4godj4ka2a637i5lq41o5, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0" while reading upstream

    Obviously this is a permission issue; my session folder's owner/group is the webserver's user, nginx. PHP-FPM runs as nobody, though, and hence adding it to the nginx group is not so trivial. A temporary solution is to set the permissions of /var/lib/php/session to 777; I have a feeling that's not best practice, though. What is the best practice when you need to give a daemon write access to a folder, but it is running as nobody?

    Read the article

  • How to present a stable data model in a public API that allows internal data structures to be changed without breaking the public view of the data?

    - by Max Palmer
    I am in the process of developing an application that allows users to write C# scripts. These scripts allow users to call selected methods and to access and manipulate data in a document. This works well; however, in the development version, scripts access the document's (internal) data structures directly. This means that if we were to change the internal data model/structure, there is a good chance that someone's script will no longer compile. We obviously want to prevent this breaking change from happening, but still want to allow the user to write sensible C# code (whilst not restricting how we develop our internal data model as a result). We therefore need to decouple our scripting API and its data structures from our internal methods and data structures. We have a few ideas as to how we might allow the user to access what is effectively a stable public version of the document's internal data*, but I wanted to throw the question out there to someone who might have some real experience of this problem. NB our internal document's data structure is quite complex and it could be quite difficult to wrap. We know we want to expose as little as possible in our public API, especially as once it's out there, it's out there for good. Can anyone help? How do scripting languages / APIs decouple their public API and data structures from their internal data structures? Is there no real alternative to having to write a complex interaction layer? If we need to do this, what's a good approach or pattern for wrapping complex data structures that include nested objects, including collections? I've looked at the API facade pattern, which looks like it's trying to address these kinds of issues, but are there alternatives? *One idea is to build a data facade that is kept stable across versions of our application. The facade exposes a set of facade data objects that are used in the script code. These maintain backwards compatibility and wrap access to our internal document's data model.
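
    To make the footnoted data-facade idea concrete, here is a minimal Java sketch (all type and method names are hypothetical, invented purely for illustration): scripts compile only against the facade types, so the internal model behind them can be reshaped freely.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        // Hypothetical internal model: free to change between releases.
        class InternalParagraph {
            String rawText; // internal representation may change at any time
            InternalParagraph(String text) { this.rawText = text; }
        }

        class InternalDocument {
            final List<InternalParagraph> paragraphs = new ArrayList<>();
        }

        // Stable facade over a paragraph: the only paragraph type scripts see.
        final class ParagraphFacade {
            private final InternalParagraph p;
            ParagraphFacade(InternalParagraph p) { this.p = p; }
            public String getText() { return p.rawText; }
            public void setText(String text) { p.rawText = text; }
        }

        // Stable facade over the document, kept backwards compatible across versions.
        final class DocumentFacade {
            private final InternalDocument doc; // hidden from scripts
            DocumentFacade(InternalDocument doc) { this.doc = doc; }

            // Read-only view; wraps internal objects so they are never exposed directly.
            public List<ParagraphFacade> getParagraphs() {
                List<ParagraphFacade> out = new ArrayList<>();
                for (InternalParagraph p : doc.paragraphs) {
                    out.add(new ParagraphFacade(p));
                }
                return Collections.unmodifiableList(out);
            }

            public ParagraphFacade addParagraph(String text) {
                InternalParagraph p = new InternalParagraph(text);
                doc.paragraphs.add(p);
                return new ParagraphFacade(p);
            }
        }

    If the internal paragraph representation changes, only ParagraphFacade's two accessors need updating; scripts written against DocumentFacade and ParagraphFacade keep compiling, which is exactly the backwards-compatibility property the question is after.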

    Read the article

  • RPi and Java Embedded GPIO: Java code to blink more LEDs

    - by hinkmond
    Now, it's time to blink the other GPIO ports with the other LEDs connected to them. This is easy using Java Embedded, since the Java programming language is powerful and flexible. Embedded developers are not used to this, since the C programming language is more popular but less easy to develop in. We just need to use a Java String array to map to the pinouts of the GPIO port names from the previously posted diagram. This way we can address each "channel" with an index into that String array.

        static String[] GpioChannels = { "0", "1", "4", "17", "21", "22", "10", "9" };

    With this new array, we can streamline the main() of this Java program to activate all the ports.

        /**
         * @param args the command line arguments
         */
        public static void main(String[] args) {
            FileWriter[] commandChannels;
            try {
                /*** Init GPIO port for output ***/
                // Open file handles to GPIO port unexport and export controls
                FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
                FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
                for (String gpioChannel : GpioChannels) {
                    System.out.println(gpioChannel);
                    // Reset the port
                    unexportFile.write(gpioChannel);
                    unexportFile.flush();
                    // Set the port for use
                    exportFile.write(gpioChannel);
                    exportFile.flush();
                    // Open file handle to port input/output control
                    FileWriter directionFile =
                        new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");
                    // Set port for output
                    directionFile.write(GPIO_OUT);
                    directionFile.flush();
                }

    And then simply add array code where we blink the LED, to make it blink all the LEDs on and off at once.

        /*** Send commands to GPIO port ***/
        commandChannels = new FileWriter[GpioChannels.length];
        for (int channum = 0; channum < GpioChannels.length; channum++) {
            // Open a command channel to each port's value file
            // (reconstructed; the excerpt is truncated at this point)
            commandChannels[channum] =
                new FileWriter("/sys/class/gpio/gpio" + GpioChannels[channum] + "/value");
        }

    A sketch of the blink loop itself, which the excerpt cuts off, follows below. It's easier than falling off a log... or at least easier than C programming. Hinkmond
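
    Here is what that blink loop might look like, as a minimal sketch under stated assumptions: the value files opened above are what the sysfs GPIO interface expects, "1" drives a pin high and "0" drives it low, and the count and half-second delay are made-up choices.

        // Hypothetical blink loop: toggles every opened port on and off ten times.
        for (int i = 0; i < 10; i++) {
            for (FileWriter channel : commandChannels) {
                channel.write("1");   // drive the pin high: LED on
                channel.flush();
            }
            Thread.sleep(500);        // enclosing method must handle InterruptedException
            for (FileWriter channel : commandChannels) {
                channel.write("0");   // drive the pin low: LED off
                channel.flush();
            }
            Thread.sleep(500);
        }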

    Read the article

  • Oracle SOA Suite 11g Administrator's Handbook

    - by Antony Reynolds
    SOA Administration Book. I have just received a copy of the "Oracle SOA Suite 11g Administrator's Handbook", so as soon as I have read it I will let you know what I think. In the meantime, the first thing that struck me was the author credentials. Although I have never met either of them, as far as I remember, I have read Ahmed's blog postings and they are a great community resource, so immediately I am well disposed towards the book. Similarly, Arun is an employee of my friend and co-author Matt Wright, and I have heard good things about him from Rubicon Red people. A first glance at the table of contents looks encouraging; I particularly like their approach to performance tuning, where they give a clear, concise explanation of the knobs they are using. More when I have read more.

    Read the article

  • Vantec NexStar NAS Enclosure - Writing large files

    - by peter
    Hi, I have one of these 'Vantec NexStar LX - NST-475LX-BK' drive enclosures; it is a NAS drive. When I write a file to the device using eSATA or an SMB share, I cannot write files over 2GB. I think this is because the drive is formatted with FAT32. But when I access the device using FTP, it doesn't matter: I can write files of any size, e.g. I wrote one on there last night which was 30GB. Does this make any sense? Why? I guess the most important thing for me is data integrity.

    Read the article

  • .pam_environment in kerberized nfs4 home directory

    - by Paul Stoever
    How can I get pam_env to read the user's .pam_environment file if the user's file is located in a kerberized NFS4 mount? The file and directory permissions for the .pam_environment file are set in a way that allows the local root to read the file. Reading .pam_environment only fails on the first login; subsequent logins successfully read the file. The client runs Ubuntu 12.04 Desktop; the NFS/Kerberos server is 12.04 Server. The Kerberos/NFS4 stuff works, with the exception of this. From /var/log/auth for the first login:

        ...
        lightdm: pam_krb5(lightdm:auth): user USERNAME authenticated as USERNAME@REALM
        lightdm: pam_unix(lightdm:session): session closed for user lightdm
        lightdm: pam_env(lightdm:setcred): Unable to open config file: USERHOME/.pam_environment: Permission denied
        lightdm: pam_env(lightdm:setcred): Unable to open config file: USERHOME/.pam_environment: Permission denied
        lightdm: pam_unix(lightdm:session): session opened for user USERNAME by (uid=0)
        ...

    Read the article

  • Samba network sharing NTFS drives and root permissions from local drives

    - by Bill
    I'm able to share my internal secondary NTFS drives (sdb1, 2 and 3) on the network with Windows computers now, but even though Samba read/write is enabled, Windows network computers can only open files read-only and can't save files to the Samba-shared drives/folders. I try to set permissions in Ubuntu via folder and/or file properties, even logged in as root via Nautilus, but all the Samba-shared folders and files are set as owner = root, accessible, and it does not allow me to change them to read/write; it just resets to root, accessible. In other words, I can't change permissions. I'm running Ubuntu 11.04 Gnome on an old Dell Dimension 2400. Also, in order for me to copy or move any files from the Ubuntu drive to the sdb1, 2 or 3 drives, I have to gksu nautilus. This consequently prevents me from copying .ISO files to my "Multisys" thumb drive too.

    Read the article

  • What are performance limits of a database?

    - by Tommy
    What are some rough performance limits (reads/s, writes/s) for a single database server (no master-slave architecture), assuming storage on disk? How many reads/s and writes/s, depending on the kind of disk (SSD vs. non-SSD), assuming simple operations (select one row by primary key; update one row; correctly indexed)? I assume this limit is dependent on disk seek/write performance. EDIT: My question is more about getting rough metrics of the number of operations a database supports: to be able to know, for example, whether a new feature triggering 300 inserts/s can be supported without scaling out with additional servers.
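
    As a rough sanity check on that EDIT (ballpark figures assumed here, not taken from the question): a single 7200 RPM spindle pays roughly an 8 ms average seek plus half a rotation (about 4.2 ms) per random IO, so

        random IOPS ≈ 1 / (t_seek + t_rot/2) = 1 / (0.008 s + 0.0042 s) ≈ 80

    On those numbers, 300 truly random inserts/s would saturate one spinning disk unless the database turns them into sequential log writes (most do, via a write-ahead log), whereas a commodity SSD at tens of thousands of random IOPS absorbs that load easily.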

    Read the article

  • How to profile LINQ to Entities queries in your asp.net applications - part 3

    - by nikolaosk
    In this post I will continue exploring ways to profile database activity when using the Entity Framework as the data access layer in our applications. If you want to read the first post of the series, click here. If you want to read the second post of the series, click here. In this post I will use the excellent Entity Framework Profiler (the best tool for EF profiling). You can download the trial, fully functional edition of this tool from here. I will use the previous example...(read more)

    Read the article

  • Empty interface to combine multiple interfaces

    - by user1109519
    Suppose you have two interfaces:

        interface Readable { public void read(); }
        interface Writable { public void write(); }

    In some cases the implementing objects can only support one of these, but in a lot of cases the implementations will support both interfaces. The people who use the interfaces will have to do something like:

        // can't write to it without explicit casting
        Readable myObject = new MyObject();
        // can't read from it without explicit casting
        Writable myObject = new MyObject();
        // tight coupling to the actual implementation
        MyObject myObject = new MyObject();

    None of these options is terribly convenient, even more so when considering that you want this as a method parameter. One solution would be to declare a wrapping interface:

        interface TheWholeShabam extends Readable, Writable {}

    But this has one specific problem: all implementations that support both Readable and Writable have to implement TheWholeShabam if they want to be compatible with people using the interface, even though it offers nothing apart from the guaranteed presence of both interfaces. Is there a clean solution to this problem, or should I go for the wrapper interface? UPDATE: It is in fact often necessary to have an object that is both readable and writable, so simply separating the concerns in the arguments is not always a clean solution. UPDATE2: (extracted as answer so it's easier to comment on) UPDATE3: Please beware that the primary use case for this is not streams (although they too must be supported). Streams make a very specific distinction between input and output and there is a clear separation of responsibilities. Rather, think of something like a bytebuffer where you need one object you can write to and read from, one object that has a very specific state attached to it. These objects exist because they are very useful for some things like asynchronous I/O, encodings, ...
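
    For the method-parameter case specifically, one option worth noting alongside the wrapper interface is a generic intersection type, which requires both interfaces without naming a combined one. A minimal Java sketch (Buffer and process are made-up names, reusing the question's own Readable and Writable):

        // A type implementing both of the question's interfaces.
        class Buffer implements Readable, Writable {
            public void read()  { System.out.println("read");  }
            public void write() { System.out.println("write"); }
        }

        class Demo {
            // T must implement BOTH interfaces; no wrapper interface needed.
            static <T extends Readable & Writable> void process(T obj) {
                obj.read();
                obj.write();
            }

            public static void main(String[] args) {
                process(new Buffer()); // compiles: Buffer is Readable and Writable
            }
        }

    This covers parameters and local generics, but a field or return type of "Readable and Writable" still cannot be expressed without a named interface, which is why the wrapper keeps coming back as the usual answer.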

    Read the article

  • How to Generate a Create Table DDL Script Along With Its Related Tables

    - by Compudicted
    Have you ever wondered, when creating table diagrams in SQL Server Management Studio (SSMS), how slickly you can add related tables to a diagram by just right-clicking on the interesting table name? Have you also ever needed to script those related tables, including the master one? And then discovered you have dozens of related tables? Or maybe there was no SSMS at your disposal? That was me one day. Well, creativity to the rescue! I Binged and Googled around until I found more or less what I wanted, but it all involved T-SQL: yeah, long and convoluted CROSS APPLYs. Then I saw a PowerShell solution that I quickly adapted to my needs (I am not referencing any particular author because it was a mashup):

        ###########################################################################################################
        # Created by: Arthur Zubarev on Oct 14, 2012                                                              #
        # Synopsis: Generate a file containing the root table CREATE (DDL) script along with its related tables   #
        ###########################################################################################################

        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null

        $RootTableName = "TableName" # The table name, no schema name needed

        $srv = New-Object Microsoft.SqlServer.Management.Smo.Server("TargetSQLServerName")
        $conContext = $srv.ConnectionContext
        $conContext.LoginSecure = $True
        # In case integrated security is not used, uncomment below
        #$conContext.Login = "sa"
        #$conContext.Password = "sapassword"
        $db = $srv.Databases.Item("TargetDatabase")

        $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
        $scrp.Options.NoFileGroup = $True
        $scrp.Options.AppendToFile = $False
        $scrp.Options.ClusteredIndexes = $False
        $scrp.Options.DriAll = $False
        $scrp.Options.ScriptDrops = $False
        $scrp.Options.IncludeHeaders = $True
        $scrp.Options.ToFileOnly = $True
        $scrp.Options.Indexes = $False
        $scrp.Options.WithDependencies = $True
        $scrp.Options.FileName = 'C:\TEMP\TargetFileName.SQL'

        $smoObjects = New-Object Microsoft.SqlServer.Management.Smo.UrnCollection
        Foreach ($tb in $db.Tables)
        {
            Write-Host -ForegroundColor Yellow "Table name being processed" $tb.Name

            If ($tb.IsSystemObject -eq $FALSE -and $tb.Name -eq $RootTableName) # feel free to customize the selection condition
            {
                Write-Host -ForegroundColor Magenta $tb.Name "table and its related tables added to be scripted."
                $smoObjects.Add($tb.Urn)
            }
        }

        # The actual act of scripting
        $sc = $scrp.Script($smoObjects)

        Write-Host -ForegroundColor Green $RootTableName "and its related tables have been scripted to the target file."

    Enjoy!

    Read the article

  • Why doesn't VisualSVN enforce credentials correctly?

    - by mrt181
    I have an SVN repository that is managed by VisualSVN. I have created a new group and added two new users to that group. When I attach this group to an existing repository and set the rights to Read/Write, these rights do not work on subdirectories; I have to set the rights on every subdirectory. But even then, the users of this group can only read the repository; they can't write anything to it. It works for the new users when I create a new repository. The users use TortoiseSVN and get a message like this when they try to write to the repository, for example at https://myserver:8443/svn/subdir/Application/trunk :

        access to '/svn/subdir/!svn/act/76a4c6fd-fa15-594a-a419-18493dacaf51' forbidden

    Read the article

  • Ubuntu 13.10 not playing DVD videos

    - by John Hill
    I installed Ubuntu GNOME 13.10 on my computer yesterday, and everything seems to be working fine so far, except that it isn't reading video or audio CDs and DVDs. At first, I inserted a DVD video, and it played normally for maybe 30-45 seconds before coming up with an "internal read error" or something like that. I was using the Totem player when the problem occurred, so I tried installing the VLC media player. It wouldn't read the disc at all, so I uninstalled it. Now the Totem player won't even begin playing the DVD; sometimes it doesn't even open when I insert the DVD, and other times it does but says it can't read the disc. I've tried several different DVDs and CDs with similar results. The computer is recognizing the optical drive, because when I open "Files", it shows the drive and the disc, but it can't play it. Previously, I ran Ubuntu 13.04 with the GNOME desktop installed from the Software Center, and I had no issues. Any help is appreciated!

    Read the article

  • Standards & compliances for secure web application development?

    - by MarkusK
    I am working with developers right now who write code the way they want, and when I tell them to do it another way they respond that it's just a matter of preference: they have their way and I have mine. I am not talking about the formatting of code, but rather about the way the site is organized in classes and the way they utilize them, the way they create functions and process forms, etc. Their coding does not match my standards, but again they argue that it's a matter of preference and that, as long as the goal is achieved, there can be different ways to do it. I agree, but their way is proven to have bugs, and we spend a lot of time going back and forth with them to fix all the problems, security or functionality, yet they still write the same code no matter how many times I ask them to stop doing certain things. Now I am ready to dismiss them, but a friend of mine told me that he has the same exact problem with the freelance developers he works with, so I don't want to trade one bad apple for another. The question is: is there some worldwide (or at least Europe- and USA-) accepted standard or compliance framework for how to write secure web-based applications? What should the application architecture be for a maintainable application? Is there some general standard that can be used for any language (Ruby, PHP, or Java) governing security, functionality, and quality of code? Or at least for PHP and MySQL, which I use for my website? Then I could make them follow this strict standard and stop making excuses.

    Read the article

  • Media Drive Permissions

    - by Wade Wofford
    I just switched from a Hackintosh to Linux and am trying to make sense of it. On my Hackintosh, I partitioned a big drive into 3 parts: one which holds music, one for film/TV, and one for the OS. I installed Ubuntu onto the OS partition and am now trying to make it so I can write to the media drives. I've searched around and tried several things. I tried gksu nautilus in Terminal, which brought me into root permissions. When I select a folder and try to change permissions, I get "The owner could not be changed... Error setting owner: Read-only file system". Ultimately, I have two specific aims:
    - I want to be able to write to the film/TV drive from the Ubuntu machine only.
    - I want to be able to write to the music drive from the Ubuntu machine, or any other machine on the network (all Macs).
    That is, I want a single music library (an iTunes file) that will serve all Mac laptops/iPads/iPhones on the network, but which XBMC on the Ubuntu machine can also see and read from. Music will be added to the iTunes library via a single Mac laptop, but all other devices should be able to see the music drive.

    Read the article

  • UDF Partition reported full when it is not

    - by Capt.Nemo
    I was using these instructions to set up an external hard disk with UDF. I have been able to set up a multi-partition system using those instructions, but I seem to have hit a wall where the partition is reported as full while writing to the disk, even though every other tool available to me reports it as free. Relevant lshw output and a screenshot showing the disk are linked above. Both the output of df and the file manager (caja) report the disk as free:

        Filesystem      Size  Used  Avail  Use%  Mounted on
        /dev/sda9       9.0G  7.6G   910M   90%  /
        udev            974M   12K   974M    1%  /dev
        /dev/sda1        50G   47G   295M  100%  /media/Data
        /dev/sda6        49G   41G   5.9G   88%  /home
        /dev/sda2       155G  127G    29G   82%  /media/Entertainment
        /dev/sda8        14G   13G   516M   96%  /media/Stuff
        /dev/sdb2       120G  1.9G   112G    2%  /media/3c887659-5676-4946-875b-b797be508ce7
        /dev/sdb3        11G  2.6G   7.7G   25%  /media/108b0a1d-fd1a-4f38-b1c6-4ad1a20e34a3
        /dev/sdb1       802G   34G   768G    5%  /media/disk

    I seem to have hit a wall near the 35GB mark. Despite being shown as 35GB/860GB used everywhere, the following happens on a write attempt:

        [2017][/media/Dory]$ echo D>>echo
        bash: echo: write error: No space left on device

    Writing byte by byte, the maximum I can take it to is 34719248K. The weirdest part is that on mounting the disk in Windows, Windows can write to it easily, and the writes are read back fine in Ubuntu. However, the used-bytes count remains at 34719248K in Ubuntu (it goes higher on Windows, however).

    Read the article

  • "A", "an", and "the" in method and function names: What's your take?

    - by Mike Spross
    I'm sure many of us have seen method names like this at one point or another:

        UploadTheFileToTheServerPlease
        CreateATemporaryFile
        WriteTheRecordToTheDatabase
        ResetTheSystemClock

    That is, method names that are also grammatically correct English sentences, and include extra words purely to make them read like prose. Personally, I'm not a huge fan of such "literal" method names, and prefer to be succinct while still being as clear as possible. To me, words like "a", "an", and "the" just look plain awkward in method names, and they make method names needlessly long without really adding anything useful. I would prefer the following method names for the previous examples:

        UploadFileToServer
        CreateTemporaryFile
        WriteOutRecord
        ResetSystemClock

    In my experience, this is far more common than the other approach of writing out the lengthier names, but I have seen both styles and was curious to see what other people's thoughts were on these two approaches. So, are you in the "method names that read like prose" camp or the "method names that say what I mean but read out loud like a bad foreign-language-to-English translation" camp?

    Read the article

  • Why is math taught "backwards"? [closed]

    - by Yorirou
    A friend of mine showed me a pretty practical Java example; it was a riddle. I got excited and quickly solved the problem. Afterwards, he showed me the mathematical explanation of my solution (he proved why it is good), and it was completely clear to me. This seems like a natural approach to me: solve problems, then generalize. It is very familiar; I do it all the time when I am programming. I write a function, and when I have to write a similar function, I generalize the problem, grab the generic parts, refactor them into a function, and solve the original problems as specializations of the general function. At the university (or at least where I study), things work backwards. The professors show just the highest possible level of the solutions ("cryptic" mathematical formulas). My problem is that this is too abstract for me. There is no connection to my previous knowledge (== reality in my sense), so even if I can understand it, I can't really learn it properly. Others learn these formulas word by word and get good grades, since they can write exactly the same thing on the test, but this is not an option for me. I am a curious person; I can learn interesting things, but I can't learn just text. My brain is for storing thoughts, not strings. There are proofs for the theories, but they are also really hard to understand because of this, and in most cases they are omitted. What is the reason for this? I don't understand why it is a good idea to show the really high level of abstraction and then leave the practical connections (or some important ideas / practical motivations) out.

    Read the article
