About Me

December 7th, 2012

My name is Chris Kranz. I now work for Kelway, a leading UK system integrator, where I head up the Solutions Architect team for storage and virtualisation (with a focus on NetApp, EMC and VMware). I started life in the late 90s as a web developer, so I know how to script and pull things apart. I write a lot of scripts to help with the simple tasks in life, as I get bored of the mundane very quickly. Anything is possible when it comes to computers; it just comes down to how much time it’ll take (and ultimately how much money that will cost!). As a Solutions Architect, I spend a lot of time talking with customers, working through solutions and designing strategies. I am very proud to say I am a VCDX (one of the first 50 globally), and I am greatly humbled by the other architects I share this qualification with (www.vmware.com/go/vcdx). I also hold a variety of qualifications in the key areas I focus on: NetApp NCDA and NCIE, EMC Proven Professional, and VMware VCP, VTSP, VCAP and of course VCDX.

I live in sunny Birmingham in the UK, and you’ll often find me driving up and down the country in my trusty Phaeton. You may spot me by the number plate!

I have learnt a lot from my two older brothers, who are major Solaris guys; anything they don’t know about Solaris isn’t worth knowing. I’m constantly quizzing them and others about anything and everything, and I’m always listening and trying to learn. If you’ve ever dealt with a Kranz, you’ll know what I mean :) Check out Tom over at www.siliconbunny.com

I want to try to give back to the community, to the people that have helped me get to where I am. Feel free to ask me any questions. I am also available for consulting and contracting roles through my employer Kelway, so just give me a shout.

  1. tinku
    | #1

    How do I set the filer password to empty?
    The NDMP copy should still happen successfully..

  2. | #2

    I can’t say I’ve ever tried to set the root password as empty, and can’t say I’d recommend it either. If you are using “ndmpcopy” you can define the source and destination credentials with “-sa username:password” and “-da username:password”.
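
    For example, a copy between two filers with explicit credentials might look something like this (the hostnames, volume paths and passwords are just placeholders):

    ndmpcopy -sa root:srcpassword -da root:dstpassword srcfiler:/vol/vol1/qtree1 dstfiler:/vol/vol2/qtree1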

  3. Richard D
    | #3

    Hello Mr. Kranz, my name is Richard Dixon. I am currently a college student at NIU in Dekalb, Illinois in the U.S.

    I wanted to ask if you have any advice for someone looking to break into the Storage Networking Industry as a Career? I would greatly appreciate anything you have to share. Thank you.

  4. | #4

    Your best bet is to start out in the industry. Most of my skills come from being self-taught and from working at B2net. Maybe find a storage vendor or reseller in your area and see about getting some work experience.

    I always find studying difficult if I haven’t got a project to work towards, so reading books and studying manuals might not be the best way to learn properly. Besides, it’s exceptionally boring doing that!

  5. | #5

    Hi, Chris:

    Do you know of any NetApp-savvy guys that would be available to do telephone consulting and support on an ad hoc basis? — basic things from how to install Ontap to more advanced troubleshooting. Many thanks; enjoyed your website.

    Scott

    Scott Fischmann
    Union Computer Exchange, Inc.
    7600 West 27th Street
    Building B1
    Minneapolis, Minnesota 55426
    scott@unioncomputer.com
    952.935.7282 – Office
    952.240.6835 – Mobile

    “Helping our customers make each dollar go further – since 1991.”

  6. | #6

    Hi Scott, B2net can certainly provide that service for you; we not only have a 24/7 support desk who are well trained in all NetApp products, but also a team of very skilled and talented engineers. If you are asking about independent consultants, I’m afraid I don’t really get any exposure to them, as we have some industry-leading skills internally and rarely need to engage third parties. I’d be happy to arrange for someone to contact you to discuss the possibility of ad-hoc support further, as it’s certainly something we can offer.

  7. Rajan
    | #7

    Hi,

    Can you tell me how to trigger a test incident/event ticket?
    I just wanted to know whether we have that feature on a NetApp box.

  8. | #8

    You mean AutoSupport? Yes, you can do this either from FilerView or from the CLI. From the CLI, just do…

    options autosupport.doit "text string here"

    … and replace "text string here" with whatever message you want NetApp to react to, usually a case number.

  9. Ron
    | #9

    Hi,
    I wanted to ask you how to destroy a LUN if it is in use.
    I have an issue on one of our N5600s where running the following command generates the error shown below:
    n5600a> lun destroy -f /vol/PRR_VOL01/lun01
    lun destroy: /vol/PRR_VOL01/lun01 : The LUN is busy, stop IO before attempting to destroy the LUN

    This LUN is already unmapped and offline, and all the SnapMirrors have been deleted. Because of this I can’t delete the volume, and it is causing an issue in our FilerView manage volume page (I’m seeing the error: Volume(s) Operation Failed. Volume busy. Please retry the operation.)

    Your help is highly appreciated.
    thanks.

  10. | #10

    Hi Ron,

    Do you have (or did you have) any LUN clones in the past? Perhaps these have become locked in a snapshot and you’ve since deleted the LUN clone but the clone still has the LUN locked. Check the snapshots of the volume and see if any are locked. If you are deleting the LUN, is there anything else in the volume? If you offline the volume, you’ll definitely remove all links to the LUN. Then you could just delete the volume and re-create it.
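
    For reference, the sort of checks I’d run look roughly like this (using the volume and LUN names from your output):

    snap list PRR_VOL01                    # look for snapshots tagged (busy,LUNs) or (busy,vclone)
    lun show -v /vol/PRR_VOL01/lun01       # confirm the LUN really is offline and unmapped
    vol offline PRR_VOL01                  # removes all links to the LUN
    vol destroy PRR_VOL01                  # then re-create the volume if needed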

  11. Casey
    | #11

    Chris,

    I just put new disks in my controller, and disk auto-assign is turned on. How do I turn auto-assign off so I can assign half of these disks to the other filer?
    disk assign 0b.30 0b.29 0b.28 0b.27 -s unowned -f
    I found this command, but when I run it, it puts the disks in an unowned state for only a couple of seconds and then reassigns them to the filer.

  12. | #12

    To turn off disk auto assign, do the following…

    options disk.auto_assign off

    and then un-own those disks again. You should be good to go then!
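
    Putting the whole sequence together, it looks something like this (using the disk names from your example; remember auto-assign needs to be off on both controllers):

    options disk.auto_assign off
    disk assign 0b.30 0b.29 0b.28 0b.27 -s unowned -f

    … then on the partner filer:

    disk assign 0b.30 0b.29 0b.28 0b.27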

  13. Kurt
    | #13

    Hi Chris,

    The aggregates are 100% full. I don’t see any hot spindles, and the volumes have sufficient space. Can there be a performance impact?

    This is a vague question, and I might not have explained it well, but we have been facing lots of NetApp performance issues lately.

    Can you please give me a heads up?

  14. | #14

    Hi Kurt,

    The problem when you have a full aggregate is that writes are affected first. Normally WAFL queues up writes to stripe across all the disks, and it tries to do this with as large a stripe as possible, as that’s the optimal way both to write and to read back later. With an aggregate at 100%, there is little room to write large stripes, so it has to break these writes into smaller chunks and fit them into the small amounts of free space that are available. This makes writes take longer, but more importantly it has a huge impact on reads. Reads now need more physical spindle movement to perform read-aheads, or even just a simple sequential read, which is now laid out across the spindles rather than in a nice tight sequence.

    Running your aggregate to 100% not only affects immediate write performance; it will continually affect read performance for any data that was written while the aggregate was near full, or full. You need to reduce your aggregate usage (down to less than 80% is recommended) and then reallocate the volumes to distribute the data across the spindles in a more orderly fashion once again. This will hugely improve read performance going forward, and the free space will allow writes to once again perform at optimal speeds.

    With a 100% full aggregate, you often don’t see a disk utilisation performance issue, but you can see high CPU, and you can see the "CP ty" (Consistency Point type) taking a long time to flush to disk. You can see this from "sysstat -u 1". It is highly dependent on the system model and the type of data you are writing, but a very rough rule of thumb is that if a CP is taking more than, say, 3-4 seconds, the system is working harder than it should. But as I say, if you have a 100% full aggregate, there’s little short-term software work you can do to alleviate the problem; the fix is simple: more disk or less data. So buy some more spindles, add them to the aggregate, then reallocate. Or delete some data / snapshots to free up space on disk and then reallocate.
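
    Once you’ve freed up space, kicking off a volume reallocation looks roughly like this (the volume name is a placeholder; check the reallocate options on your ONTAP version first):

    sysstat -u 1                   # watch CP time and disk utilisation first
    reallocate measure /vol/myvol  # check how fragmented the volume layout is
    reallocate start -f /vol/myvol # force a one-off full reallocation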

  15. borokini
    | #15

    Hi, can you help me with this issue? My aggregate is offline, and below is the output when I run "aggr status -r" on the filer:

    Aggregate vol0 (online, raid4) (block checksums)
    Plex /vol0/plex0 (online, normal, active)
    RAID group /vol0/plex0/rg0 (normal)

    RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
    --------- ------ -- ----- --- ---- ---- ---- --- -------------- --------------
    parity 8b.21 8b 1 5 FC:B - FCAL 10000 68000/139264000 69536/142410400
    data 8a.16 8a 1 0 FC:A - FCAL 10000 68000/139264000 69536/142410400

    Aggregate aggr1 (failed, raid4, partial) (block checksums)
    Plex /aggr1/plex0 (offline, failed, inactive)
    RAID group /aggr1/plex0/rg0 (partial)

    RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
    --------- ------ -- ----- --- ---- ---- ---- --- -------------- --------------
    parity 8b.23 8b 1 7 FC:B - FCAL 10000 68000/139264000 69536/142410400
    data FAILED N/A 68000/139264000
    data 8b.24 8b 1 8 FC:B - FCAL 10000 68000/139264000 69536/142410400
    data FAILED N/A 68000/139264000
    data 8a.25 8a 1 9 FC:A - FCAL 10000 68000/139264000 69536/142410400
    data FAILED N/A 68000/139264000
    data 8a.26 8a 1 10 FC:A - FCAL 10000 68000/139264000 69536/142410400
    data FAILED N/A 68000/139264000
    data 8b.28 8b 1 12 FC:B - FCAL 10000 68000/139264000 69536/142410400
    data FAILED N/A 68000/139264000
    data FAILED N/A 68000/139264000
    data 8a.17 8a 1 1 FC:A - FCAL 10000 68000/139264000 69536/142410400
    data FAILED N/A 68000/139264000
    data 8b.18 8b 1 2 FC:B - FCAL 10000 68000/139264000 69536/142410400
    Raid group is missing 7 disks.

    RAID group /aggr1/plex0/rg1 (partial)

    RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
    --------- ------ -- ----- --- ---- ---- ---- --- -------------- --------------
    parity FAILED N/A 68000/139264000
    data 8b.19 8b 1 3 FC:B - FCAL 10000 68000/139264000 69536/142410400
    data FAILED N/A 68000/139264000
    data 8a.20 8a 1 4 FC:A - FCAL 10000 68000/139264000 69536/142410400
    data FAILED N/A 68000/139264000
    data 8b.22 8b 1 6 FC:B - FCAL 10000 68000/139264000 69536/142410400
    data FAILED N/A 68000/139264000
    data 8a.27 8a 1 11 FC:A - FCAL 10000 68000/139264000 69536/142410400
    data FAILED N/A 68000/139264000
    data 8b.29 8b 1 13 FC:B - FCAL 10000 68000/139264000 69536/142410400
    data FAILED N/A 68000/139264000
    Raid group is missing 6 disks.

    Spare disks (empty)

    Physically, all the disks show a dull green indicator.

    thanks for your response.

  16. borokini
    | #16

    How can I bring it back online?

  17. | #17

    It looks like you have a large number of disks missing or failed from the environment. These need fixing before you can bring the aggregate back online again. You need to check that the disks are connected properly. The best way to achieve this is probably to power down the system and ensure all cabling is fully connected and secure, and that all disks are properly seated. Hopefully something like a loose cable has caused the filer to fail these disks rather than an actual data or mechanical failure of all those disks. Check all the connectivity first, then you might be able to unfail those disks if nothing is actually wrong with them.

    However if the disks are fully failed, then I’m afraid you could be in quite a situation.

    I would strongly recommend you contact NetApp Global Support as they will be able to walk you through the process of checking these disks and if possible repairing the aggregate. It may be a known bug and an easy fix, but they are in the best position to diagnose this.
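
    For reference, if support do confirm the disks themselves are healthy (e.g. it was just a loose cable), unfailing a disk is done from the advanced privilege level, roughly like this (the disk name is one from your output; only do this under NetApp’s guidance):

    priv set advanced
    disk unfail 8b.24
    priv set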

  18. Kurt
    | #18

    Thanks a lot for the reply Chris.

    We have a setup where Exchange DBs are running on NetApp iSCSI LUNs. The Exchange servers are on ESX servers. There are many different LUNs on different aggregates; some run SQL, some run different DB apps. I see a lot of hot spindles on the NetApp.

    Which is the better way of mapping a LUN to an Exchange VM?
    1. (SnapDrive + iSCSI initiator) from the VM
    2. Raw mapped LUNs, which will be assigned as datastores

    Thanks again for the reply, it was most helpful.

  19. Kurt
    | #19

    Hi Chris, please ignore the question, I must have been out of my mind when I asked this!

  20. borokini
    | #20

    Hi,
    Thanks for your advice, it really worked.

    The cable connecting one of the shelves was bad and has been replaced.
    The aggregates are back online now.

  21. | #21

    Hi Kurt,

    I’ll give you a quick response, although sounds like you have things sorted now.

    If you want to use SnapManager for Exchange, then you really have 2 different ways to present the storage to the Exchange VM.

    1) Connect to them using iSCSI software initiator within the VM
    2) Connect to them using RDMs on the ESX level and present a raw LUN to the VM. This can be done using either FCP or iSCSI.

    The main advantage of option 1 is granularity of control. The Exchange admin doesn’t also need to be a VMware admin in order to manage and control his storage. Arguably everyone should have some VMware knowledge though. The downside to option 1 is that if you have a lot of VMs requiring storage this way, each one needs its own independent software initiator installed, and this has both management and CPU overheads.

    The advantage of option 2 is that you are centralising the storage connectivity. Regardless of the use, it is always done at the VMware level, which can give you better security and visibility of who is using what storage. The advantage of using iSCSI at the host level is that there is only one instance of the software initiator; with FCP this disappears altogether. The main downside of option 2 is the reverse of option 1: the Exchange admin doesn’t get direct control of his storage. However, with SnapDrive you do still get quite good control; you can still clone, grow, snapshot and so on from within SnapDrive.

    Presenting storage to a datastore and then carving out a VMDK is probably the worst case, as you don’t get any SnapDrive or SnapManager integration from Exchange, but you still have the overhead of connecting from VMware. I think it may just be a typo in your question, as it looks like you were deciding between RDMs from ESX and an iSCSI initiator within the VM. My personal favourite is using RDMs, although there may be additional caveats or changes if you are also doing DR. VMware SRM deals well with RDMs, however, but doesn’t work at all with iSCSI initiators within the VM.

    Hope you have all things sorted already however!

  22. Kurt
    | #22

    Hi Chris,

    Thanks a Lot for your reply.

    There are 100s of VMs running iSCSI initiators in our setup, along with SQL DBs and mail DBs. It seems a lot of I/O happening on a SQL DB is in turn affecting the mail server performance.

    I do think it is better to separate the mail DBs from the array which is serving the SQL DBs.

    Thanks again for your reply and insights.

    Blogs like yours are invaluable for people like us who are still exploring the complexities of Storage.

    Regards,

    Kurt

  23. | #23

    Hi Kurt,

    If you have a cluster, I’d look to put the SQL DBs on one node and the mail DBs on the other node. The logs then sit on the opposite system again. This really helps to balance the load across a cluster, gives you more spindles to use and gives you a level of physical separation. It also helps you do some level of damage control with the systems separated out like this. However, I wouldn’t dedicate one system to SQL and the other to Exchange; you want to balance things like logs out. It’ll give you more spindles for a single application, and you can still apply a level of control.

    Also look at Storage IO Control from the VMware side of things (great way to control IO if you have misbehaving machines), and also look at putting different priorities on the NetApp volumes. You can set these from very low, low, medium, high and very high. This works in a similar way to VMware shares, but you put them at the volume level and have some level of control over the IO and performance that each volume gets delivered. For example you might want to limit CIFS users in favour of a SQL database. It may be that everything stays on medium (the default) but you put the Mail DB on very high to give it a better share of resources.

    Thanks for the feedback, it’s just a shame I don’t get more time to commit more topics!

  24. Kurt
    | #24

    Hi Chris, Thanks for the reply.

    I have been looking at priority for some time. My fear is that a wrongly applied priority might make the system worse (I think so).

    The other thing is, as per my understanding, if I want to enable priority then I have to enable it on all the volumes in every aggregate individually, right?

    I will surely consider migrating the mail DBs onto one cluster partner and SQL onto the other, to try balancing.

    Also, do you advise enabling priority on a filer that is full on resources? i.e. if the aggregates are full, will enabling priority have any impact?

    Thanks again for the reply.

    best Regards,

    Kurt

  25. | #25

    When you enable priority, it sets a default (medium) policy across all volumes, so you don’t need to do it individually. You will need to individually set differences to this as required however.

    Priority can help a system that struggles to compete with things like SnapMirror as it gives priority over system tasks. As with any sort of resource sharing tool (same as I tell my VMware customers) restricting resources on a busy system will of course affect things. It’ll make a bad situation better for the volumes you give priority to, but it’ll make it a lot worse to all the other volumes. A system under heavy load already that is having performance problems isn’t the best candidate for resource shares or priority limits. I’d recommend you address the performance issues first or start looking to limit the load from the application side before placing priority across all volumes.

    Remember also that priority only comes into play when there is contention for resources. So if, for example, you want to give more resources to SQL and Exchange, and the only other resources on that system are lightly used CIFS shares, you’ll probably gain little benefit, as little IO is being generated from the CIFS shares. Giving all priority to SQL and Exchange will still result in them constraining each other.
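
    For reference, turning priority (FlexShare) on and favouring one volume looks roughly like this (the volume name is a placeholder):

    priority on
    priority set volume maildb level=VeryHigh
    priority show volume maildb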

  26. Kurt
    | #26

    Hi Chris,

    Thanks for the reply.

    I am currently planning the priority. It seems there are a lot of test DBs and staging DBs on the aggregate, and they are on NFS! I am re-evaluating the setup.

    Thanks for your inputs; they helped me a lot to get a perspective.

    Regards,

    Kurt

  27. Michael Parker
    | #27

    Hi Chris,

    Great site and a wonderful service you do for the community! I am in a bit of a bind here. I created a FlexClone from a snap in a VSM destination. I then initiated a vol clone split to split it off. The problem is that the snap has been deleted on the source. Now when the VSM attempts to update, it fails because it cannot delete the destination snap. From the status of the split, it’ll be days before it is complete, and I cannot afford to wait that long, as the VSM is part of a backup process for our Oracle environment. Is there any way to make the split work faster or get around this?

    Thanks in advance,

    Michael

  28. | #28

    Hi Michael, sorry for the delay in getting back to you. You could try to perform a SnapMirror resync, however the chances are high that this may now require a new baseline. Additionally if VSM can’t delete the destination snap, then chances are that the flex clone still has this locked somehow. The split process can take some time to complete, so you may have been trying to do this too quickly after initiating the split.

    Unfortunately when SnapMirror gets stuck or confused, the only real option is a baseline unless some similar snapshots still exist. I’ve been in many awkward situations where I’ve had to re-baseline a destination because of snapshots being deleted for one reason or other.

  29. Anton
    | #29

    @Chris Kranz

    I wanted to bump this;
    I am having the same issue; however, I cannot offline the volume because it tells me that the volume is busy.
    I cannot delete the LUN because the system tells me "The LUN is busy, stop IO before attempting to destroy the LUN." The LUN is backed by a snapshot, and the snapshot is in a vclone,busy state.
    Every time I try to take the volume offline, I get an error in FilerView and SSH freezes for a good 15 minutes. I cannot split the clone either; it tells me IO busy. What else can you recommend? Thanks.

  30. Anton
    | #30

    BTW this is in reply to #9

  31. | #31

    Apologies for missing your comment!

    Can you give me a wider picture of the volume/LUN configuration? Is the volume in question a FlexClone, or is it being FlexCloned? If not, have any LUN clone operations been done on it, either manually or from a SnapManager job? When you show the snapshots for the volume, what does it show? Is any particular snapshot showing as locked or busy? Is the LUN mapped at the moment?

  32. John
    | #32

    Hi Chris ,

    I would like to know why and when LUN clone and FlexClone are used.
    I know LUN clone is free and FlexClone requires a license, but apart from that, how are they different and when do we opt for either?

    Thanks
    John

  33. | #33

    LUN clone makes a clone of the LUN within the same volume as the original. This means that any snapshot taken after the clone is created references the LUN clone, locking it so it cannot be deleted. This can cause some scheduling and administrative challenges.

    FlexClone is much more flexible in these terms as it creates a clone of a particular snapshot of the entire volume. Although this would lock an individual snapshot, it does not affect any future snapshot operations, and the locking mechanism only affects the normal routine tasks if the FlexClone is retained longer than the normal snapshot retention period.

    FlexClone is a much more dynamic and admin friendly technology and is definitely the recommended approach. It takes a lot of the hassle out that LUN clone can often cause.
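
    As a quick sketch, creating a FlexClone from an existing snapshot and (optionally) splitting it off later goes something like this (the volume and snapshot names are placeholders):

    vol clone create myclone -s none -b myvol mysnapshot
    vol clone split start myclone   # optional: makes the clone a fully independent volume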

  34. | #34

    Small world syndrome… I found your site via googling a man page for qtrees and only later checked out this page. Oh look, a Kranz… could it be? I had Tom working for me down in Gibraltar a little while back; be sure to amuse him with this greeting :-)

  35. | #35

    There aren’t too many Kranzes around :) I’ll tell Tom next time I see him!

  36. Jon Swan
    | #36

    Hi Chris, I found your website and thought I’d give this a whirl!

    I have the following problem, as described in the attached link:

    http://communities.netapp.com/thread/13850

    Do you know how to overcome this?

    Thanks

    Jon

  37. | #37

    Looks like you’ve hit a known bug in VSC. The only real solutions are the scripted approach to fix the naming (which seems to work for some people), or waiting for NetApp to fix the code in VSC to prevent this happening!

  38. Kurt
    | #38

    Hi Chris,

    For some time I have been contemplating the use of user quotas on my NetApp file server shares.

    When I enable a user quota on the qtree, for one user it shows usage of 15GB.

    But the user says that his folder size is 7GB. My understanding is that, though his folder is 7GB, there might be files in other users’ folders that are owned by him.

    So a user quota recognises a user’s usage by file ownership?

    Is Data ONTAP capable of listing those files by ownership?

    And is there no way I can set different notifications for different qtrees?

    Regards,

    K

  39. Manoj
    | #39

    I’m getting the message below; I’m unable to destroy the LUN.

    The LUN is busy, stop IO before attempting to destroy the LUN.

    I saw the reply below for this issue on your forum:
    —————————————-
    February 15th, 2011 at 17:08, #10: Hi Ron,

    Do you have (or did you have) any LUN clones in the past? Perhaps these have become locked in a snapshot and you’ve since deleted the LUN clone but the clone still has the LUN locked. Check the snapshots of the volume and see if any are locked. If you are deleting the LUN, is there anything else in the volume? If you offline the volume, you’ll definitely remove all links to the LUN. Then you could just delete the volume and re-create it.
    ————————————–

    1. The LUN was cloned long ago; the volume has only the recent snapshot.

    2. I checked with the >lun usage command.
    No dependency on the snapshot.

    I tried deleting all the snapshots, but I’m still unable to delete it.

    Can you send me the steps for deleting the LUN?

    Thanks in advance.

  40. | #40

    And there are no snapshots against the volume containing the LUN? Is the LUN mapped to any initiators? Does the volume host any other LUNs? From the stats command or lun stats command, do you see any activity going to this LUN?
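
    To check each of those, the commands look something like this (the volume and LUN paths are placeholders; substitute your own):

    lun show -m                             # is the LUN mapped to any igroups?
    snap list myvol                         # any snapshots tagged busy?
    lun stats -o -i 1 /vol/myvol/lun01      # any IO still hitting the LUN?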

  41. Ghostwire
    | #41

    Chris,

    I just wanted to take the time to thank you for posting this blog. It has been very helpful for learning the basics of NetApp. Thank you again for taking the time to keep up with this blog.

  42. David
    | #42

    Hi Chris.

    Do you know anyone that uses NetApp + Northern quotas?
    Did you have any problems (high CPU1 / kahuna domain) using the Northern quota server or another quota system?
    How can I identify which process uses the CPU resources in CPU1 / the kahuna domain?

    Thanks in advance.

  43. | #43

    Yes, and yes, I have known some performance issues. Newer versions of ONTAP improve on the use of the kahuna domain, but it does depend. It could simply be that the amount of throughput you are putting on the NetApp is too much for the size of controller. The way that ONTAP generally works is that it’ll only commission a new CPU (core / socket) when usage is over 80-90% on the existing CPUs, so you can see a single CPU being used and all other CPUs being idle. That doesn’t mean it’s doing anything wrong. I’d use the stats command to dig a little deeper into which CPUs and domains are being used.
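
    As a starting point, a per-CPU and per-domain look at the system goes roughly like this (on some ONTAP versions the statit command needs the advanced privilege level):

    sysstat -m 1        # per-CPU utilisation, one-second interval
    priv set advanced
    statit -b
    # ... wait 30 seconds or so under normal load ...
    statit -e           # includes a per-domain CPU breakdown
    priv set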

  44. Joseph
    | #44

    Hi Chris,
    I have a question about NetApp: how do I check how many processes are running on the filer? And how do I check for background processes, if there are any, on the filer?

  45. | #45

    NetApp systems don’t really work that way. You can use the statit and stats commands on the CLI, but you can’t do much about things like background processes, as they are part of the system. The CPUs only get enabled when they are required, so a system that isn’t particularly busy could be using a single CPU at up to 80%, or peaking a little higher. Due to the overhead of SMP, it doesn’t enable multiple cores or CPU sockets unless there is an actual requirement to do so.

  46. Joseph
    | #46

    Hi Chris,

    Can you advise me on a LUN migration tool? I need to migrate from NetApp to HP.

  47. | #47

    There’s no simple way to migrate a LUN from one storage vendor to another (often no easy way from one system to another within the same vendor!). I’m afraid the best method is going to be through host copies, either using the application itself, some sort of logical volume manager to replicate the volume, or a basic file copy with robocopy/xcopy/rsync/etc.
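
    As a rough example of the file-copy route on a Linux host that can see both the old and the new LUN mounted (the mount points are placeholders):

    rsync -avH /mnt/netapp_lun/ /mnt/hp_lun/            # preserves permissions, ownership and hard links
    # ... quiesce the application, then a final catch-up pass:
    rsync -avH --delete /mnt/netapp_lun/ /mnt/hp_lun/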

  48. Sumit
    | #48

    Hi Chris

    I am getting an error while mounting an NFS share on a Linux machine. Whenever I give the command [root@Linux ~]# showmount -e 192.168.10.70 I get:
    mount clntudp_create: RPC: Port mapper failure – RPC: Unable to receive
    [root@Linux ~]# rpcinfo -p
    program vers proto port
    100000 2 tcp 111 portmapper
    100000 2 udp 111 portmapper
    100024 1 udp 961 status
    100024 1 tcp 964 status
    100021 1 udp 1026 nlockmgr
    100021 3 udp 1026 nlockmgr
    100021 4 udp 1026 nlockmgr
    100021 1 tcp 3504 nlockmgr
    100021 3 tcp 3504 nlockmgr
    100021 4 tcp 3504 nlockmgr

    Can you please suggest what I should do, as I am a beginner in NetApp and Linux?
    Also, can you recommend any Linux and NetApp material which would help me in learning the same?

    Sumit

  49. Prem
    | #49

    Hi Chirs,
    I have a quick query. We are planning to migrate data from 7-Mode to a clustered solution (8.1.2 code) without using DTA or VTW. We planned to use robocopy for the Windows shares. However, the issue is ACLs; users use permission inheritance at different levels. Is there any way we could do an ndmpcopy from 7-Mode to the clustered solution?

    Thanks,
    Prem S.

  50. | #50

    Hi Prem,

    I’m not sure if you can use NDMP copy, but why can’t you use robocopy for this? Robocopy can be set to preserve the ACLs and permissions. Alternatively you can also do this pretty simply with PowerShell (get-acl / set-acl).
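
    As a sketch of the robocopy route (the UNC paths are placeholders; /COPYALL copies the security information, owner and auditing info along with the data):

    robocopy \\7mode-filer\share \\cdot-svm\share /MIR /COPYALL /R:1 /W:1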




This site is not affiliated with or sponsored in any way by NetApp or any other company mentioned within.